Andrej Karpathy coined the term in early 2025: you describe what you want, the AI writes the code, you review and guide rather than type. He called it "vibe coding" because you're operating at the level of vibes and intent, not syntax and implementation details.
The term went viral. Half the developer community called it the future. The other half called it a way to ship insecure garbage faster.
Both sides have a point. Here's an honest take aimed at Indian developers in 2026 — not a hype piece, not a dismissal.
What vibe coding actually is
The term gets misused constantly, so let's be precise.
Vibe coding is not: "the AI writes all your code and you copy-paste it."
It's more accurate to describe it as: high-level intent → AI generates → developer reviews, steers, and corrects. The skill shift is from writing syntax to specifying architecture and reviewing output.
Where it works brilliantly:
- Boilerplate and scaffolding — CRUD endpoints, database migrations, form validation — anything where the pattern is known and the implementation is mechanical
- Test generation — unit tests, integration tests, test data factories
- Data transformation — parsing, mapping, converting between formats
- Documentation — docstrings, README sections, API documentation from code
Where it breaks down:
- Complex business logic with subtle rules — anything where the "what" is hard to specify precisely
- Security-critical code — auth flows, payment processing, encryption
- Performance-sensitive hot paths — AI-generated code is usually correct but not optimized
- Code interfacing with Indian regulatory systems — UPI edge cases, GST rounding rules, RBI compliance — the AI doesn't know these well
The developers getting the most value from vibe coding understand this distinction. They use it aggressively for the first category and carefully (or not at all) for the others.
The Indian developer's version of vibe coding
Most Indian developers work in one of three contexts: large IT services (TCS, Infosys, Wipro), Indian product startups, or freelancing. Vibe coding changes the economics of all three in different ways.
For IT services developers, the constraint usually isn't how fast you can code — it's how fast the client approves requirements and how fast your team's review process moves. Vibe coding helps, but it speeds up a stage that was never the bottleneck.
For Indian startup developers, vibe coding is the biggest individual productivity lever available right now. A 2-day feature estimate becomes a 4-hour feature. That's not hypothetical — I've seen it repeatedly, and so have most engineers who've adopted it properly.
The pricing question comes up immediately among freelancers: if I can build features 3× faster, should I charge 3× less? No. You're charging for the outcome, not the hours. If you can deliver the same quality in less time, that's your competitive advantage — not a reason to lower your rate. The developers winning at freelancing in 2026 are the ones who've internalized this.
Tools for vibe coding in 2026
India-accessible options with actual INR costs:
| Tool | What it is | Monthly cost (₹) | UPI payment? |
|---|---|---|---|
| Claude Code via AICredits.in | Terminal AI agent, repo-level | ~₹300–2,500 (usage) | Yes — GPay/UPI |
| Cursor Pro | VS Code fork, best autocomplete | ~₹1,680 | No (USD card) |
| Windsurf Pro | VS Code fork, Cascade agent | ~₹1,260 | No (USD card) |
| GitHub Copilot | IDE autocomplete + chat | ~₹840 | No (USD card) |
| Windsurf Free | Limited credits, Cascade agent | ₹0 | N/A |
The UPI payment column matters more than it looks. Paying $20/month with an Indian debit card often fails or adds forex charges. AICredits.in solves this specifically — add credits via GPay, use Claude Code without an international card.
For most Indian developers paying personally (not on a company card), the practical starting point is: Windsurf free tier for daily IDE work + AICredits.in Claude Code for the heavy lifting.
How to actually do it well
The golden rule: CLAUDE.md is your architecture document
Before you write a single prompt to generate code, write a CLAUDE.md file. This is the markdown file that Claude Code reads at the start of every session. It's your project's constitution — stack, conventions, what to avoid.
Here's a CLAUDE.md for a typical Indian startup project:

```markdown
# Project Context

## What this is
A B2B SaaS platform for Indian SMEs. Multi-tenant. Payments via Razorpay.
Users are Indian business owners — prioritize mobile-responsive UI.

## Stack
- Backend: FastAPI (Python 3.12), PostgreSQL 16, Redis, Celery
- Frontend: Next.js 14 (App Router), TypeScript, Tailwind CSS
- Auth: JWT via fastapi-users
- Payments: Razorpay (not Stripe)
- Deployment: AWS EC2, Nginx, PM2

## Code conventions
- Python: type hints everywhere, Pydantic v2 for validation
- Async: all DB operations are async (SQLAlchemy async)
- Tests: pytest, coverage > 80% for new code
- API responses: always use response_model, never return raw dicts
- Error handling: use HTTPException with specific status codes, include error_code field
- Indian context: amounts always in paise (not rupees), dates in IST

## What NOT to do
- Do not use Stripe — we use Razorpay
- Do not hardcode INR amounts — always use paise integers
- Do not use synchronous SQLAlchemy
- Do not add print statements — use structlog
- Do not create new API endpoints without corresponding pytest tests
```
The quality of your CLAUDE.md directly determines the quality of vibe coding output. I've watched developers spend 30 minutes writing a thorough CLAUDE.md and then generate 400 lines of near-production-ready code on the first try. I've also watched developers skip it and spend the next hour fixing hallucinated function names and wrong library choices.
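The two Indian-context conventions in that sample file — integer paise and IST timestamps — are easy to get wrong in generated code. Here's a minimal stdlib sketch of both (the function names are illustrative, not from any particular codebase):

```python
from datetime import datetime, timedelta, timezone
from decimal import Decimal

# IST is a fixed UTC+5:30 offset with no DST, so a fixed-offset timezone is safe;
# zoneinfo.ZoneInfo("Asia/Kolkata") is the fuller alternative on Python 3.9+.
IST = timezone(timedelta(hours=5, minutes=30), name="IST")

def rupees_to_paise(rupees: str) -> int:
    """Convert a rupee string like '499.50' to integer paise.

    Decimal avoids the float rounding surprises that plague money math.
    """
    paise = Decimal(rupees) * 100
    if paise != paise.to_integral_value():
        raise ValueError(f"sub-paise amount not allowed: {rupees}")
    return int(paise)

def now_ist() -> datetime:
    """Timezone-aware current time in IST, for created_at-style fields."""
    return datetime.now(IST)

print(rupees_to_paise("499.50"))  # 49950
```

Conventions like these belong in CLAUDE.md precisely because the model won't infer them: left to its defaults, it will happily store floats of rupees and naive UTC datetimes.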
Work in small, reviewable chunks
The worst vibe coding sessions I've seen start with: "Build me a complete user authentication system."
The best start with: "Add a POST /auth/register endpoint that accepts email and password, validates both, creates a User record with hashed password (use bcrypt), returns 201 with user ID."
One function at a time. Review it before asking for the next one. The "build, check, guide" loop:
- Give precise spec for one function/endpoint/component
- Review output: does it match your conventions? Any security issues? Correct types?
- Accept, fix, or reject
- Move to the next piece
This isn't slower than "build everything at once" — it's faster, because you're not debugging 400 lines of output where the error is somewhere in the middle.
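That register spec is precise enough to capture framework-free. Here's a hedged sketch of its core logic — I've substituted stdlib PBKDF2 for bcrypt so the example runs without dependencies, and all names are illustrative:

```python
import hashlib
import os
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # deliberately simple check

def hash_password(password: str) -> str:
    """Stand-in for bcrypt: PBKDF2-HMAC-SHA256 from the stdlib.
    The real endpoint would use bcrypt with 12 rounds, as the spec says."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt.hex() + ":" + digest.hex()

def register(email: str, password: str, existing_emails: set[str]) -> tuple[int, dict]:
    """Framework-free core of the POST /auth/register spec:
    validate email, check uniqueness, hash, return (status_code, body)."""
    if not EMAIL_RE.match(email):
        return 422, {"error_code": "invalid_email"}
    if email in existing_emails:
        return 409, {"error_code": "email_exists"}
    user = {"email": email, "password_hash": hash_password(password)}
    return 201, {"id": 1, "email": user["email"]}  # never return the hash

status, body = register("dev@example.com", "s3cret", existing_emails=set())
print(status)  # 201
```

A spec this explicit is what makes the review step fast: you check the output against a checklist (validation, uniqueness, hashing, status codes) rather than re-deriving the requirements.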
Learn to write good specs, not good code
This is the real skill shift. The developer who thrives in 2026 can write precise functional specifications. Not code — specifications.
Compare these two prompts for the same feature:
Vague: "Add rate limiting to the API"
Precise: "Add rate limiting to all authenticated endpoints using slowapi and Redis as the backend. Limit: 100 requests per 15 minutes per user ID (not per IP — users may share office IPs). Return 429 with JSON body {"error": "rate_limit_exceeded", "retry_after_seconds": N}. Exclude the /health and /auth/refresh endpoints. Write pytest tests covering: under-limit request (200), at-limit request (200), over-limit request (429), retry-after header correctness."
The second prompt produces usable code on the first pass. The first produces something you'll rewrite three times.
Writing good specs is a learnable skill. It's also directly useful for working with human developers and writing engineering documentation.
What vibe coding can't replace
Be honest with yourself about where AI-assisted coding isn't appropriate.
Debugging complex production issues. Reading logs, hypothesizing about race conditions, reasoning about distributed system behavior under load — this is still deep human work. AI tools can help with specific lookups but won't replace the intuition built from years of debugging production systems.
System design for scale. "Design the data model for a multi-tenant SaaS with 10,000 tenants and 1M events/day" requires judgment about your specific constraints. AI can enumerate options; you have to choose.
Security review. Never ship AI-generated authentication code, authorization logic, or payment processing code without a human security review. This isn't being conservative — AI models make subtle security mistakes that look correct syntactically. An OWASP-aware code review is not optional for auth and payment flows.
Code interacting with Indian regulatory systems. UPI transaction limits, NACH mandate rules, GST rounding logic, RBI reporting requirements — the models don't know these well enough to be reliable. Always validate against the actual RBI/NPCI documentation.
💡 Want to go deeper? The prompting patterns that make vibe coding reliable — chain-of-thought, constrained generation, structured output — are covered in our Advanced track.
A real 2-hour vibe coding session walkthrough
This is a recent session. Starting point: "I need a FastAPI app with JWT auth and a Postgres user table."
Setup (10 minutes): Wrote the CLAUDE.md first — FastAPI, SQLAlchemy async, alembic for migrations, pytest, UPI payments context.
Step 1 — Database model (15 minutes):
```
Create a User SQLAlchemy model with:
- id: UUID primary key (server-generated)
- email: unique, indexed, not null
- password_hash: string, not null
- is_active: bool, default True
- created_at: datetime with timezone (IST)
- last_login: datetime with timezone, nullable

Include Pydantic schemas: UserCreate (email + password), UserResponse (no password_hash)
```
Output: 60 lines. I changed one thing: the created_at default was datetime.utcnow() — I asked it to change to func.now() at the database level. One iteration.
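A quick illustration of why that default was worth flagging (hypothetical helper names; the real fix was SQLAlchemy's server-side func.now()): a Python-side default written with parentheses, like datetime.utcnow(), is evaluated once at definition time, so every row silently shares one stale timestamp:

```python
import time
from datetime import datetime, timezone

def make_row_bad(created_at=datetime.now(timezone.utc)):
    # Default evaluated ONCE, when the function is defined — the classic trap.
    return {"created_at": created_at}

def make_row_good(created_at=None):
    # Evaluated per call; a DB-side func.now() pushes this to the server clock instead.
    return {"created_at": created_at if created_at is not None else datetime.now(timezone.utc)}

a = make_row_bad()
time.sleep(0.01)
b = make_row_bad()
print(a["created_at"] == b["created_at"])  # True — both rows got the same timestamp
```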
Step 2 — Alembic migration (5 minutes):
```
Generate an Alembic migration for the User model above.
```
Output: Correct migration. Zero changes needed.
Step 3 — Auth endpoints (25 minutes):
```
Create POST /auth/register and POST /auth/login endpoints.
Register: accepts UserCreate, validates email format, checks uniqueness, hashes password with bcrypt (12 rounds), creates user, returns UserResponse with 201.
Login: accepts email + password, returns JWT token (24h expiry) and UserResponse. Use python-jose for JWT.
Include: proper HTTPException for duplicate email (409), wrong credentials (401), inactive account (403).
```
Output: 120 lines. I checked the JWT secret handling — it was reading from an env variable correctly. The bcrypt rounds were 12 as specified. The error messages were slightly generic; I asked it to include an error_code field in the response. One iteration.
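The prompt delegates JWT handling to python-jose, but reviewing that output is easier if you know what you're looking at. Here's a stdlib sketch of HS256 sign/verify with the 24h expiry (function names are mine, not python-jose's API):

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(user_id: str, secret: str, ttl_seconds: int = 24 * 3600) -> str:
    """Build an HS256 JWT: base64url(header).base64url(payload).signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps({"sub": user_id, "exp": int(time.time()) + ttl_seconds}).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def decode_jwt(token: str, secret: str) -> dict:
    """Verify signature and expiry; raise ValueError (maps to HTTP 401) on failure."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims
```

Knowing the shape of the token is what lets you spot real review items quickly: is the secret from an env variable, is compare_digest (or the library equivalent) used, is expiry actually checked.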
Step 4 — Auth middleware (20 minutes):
```
Create a get_current_user dependency that:
- Reads Bearer token from Authorization header
- Decodes JWT with python-jose
- Fetches user from DB
- Raises 401 if token invalid, expired, or user not found
- Raises 403 if user.is_active is False
Return: User ORM object
```
Output: 50 lines. Correct on first pass.
Step 5 — Tests (45 minutes):
```
Write pytest tests for all auth endpoints above.
Cover: successful registration, duplicate email, invalid email format, successful login, wrong password, inactive user, valid token middleware, expired token, missing token.
Use pytest-asyncio, httpx AsyncClient, a test database (SQLite for tests), and factory_boy for user creation.
```
Output: 150 lines. Three tests needed fixes — the factory wasn't hashing passwords before inserting (would fail login tests). I pointed this out; it fixed all three in one shot.
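That factory bug is worth spelling out, because it's a pattern AI test generation repeats. Here's a minimal reproduction with a deterministic stand-in hash (plain sha256, purely so the example is self-contained; the real code used bcrypt, and the factory names are illustrative):

```python
import hashlib

def hash_password(password: str) -> str:
    # Deterministic stand-in for bcrypt, so the example has no dependencies.
    return hashlib.sha256(password.encode()).hexdigest()

def verify_password(password: str, stored_hash: str) -> bool:
    return hash_password(password) == stored_hash

# The bug: the factory inserted the plaintext into the password_hash column.
def broken_user_factory(email="a@b.com", password="pw"):
    return {"email": email, "password_hash": password}  # not hashed!

# The fix: hash before insert, which is what the login path expects to find.
def fixed_user_factory(email="a@b.com", password="pw"):
    return {"email": email, "password_hash": hash_password(password)}

broken = broken_user_factory()
fixed = fixed_user_factory()
print(verify_password("pw", broken["password_hash"]))  # False — login tests fail
print(verify_password("pw", fixed["password_hash"]))   # True
```

The tell in a real test run is login tests failing for users the factory just created, while registration tests pass.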
Total code output: ~380 lines of production-quality FastAPI code. Code I wrote manually: ~30 lines (mostly CLAUDE.md, minor fixes). Time: 2 hours including setup, review, and the back-and-forth.
That's the actual experience. Not magic — I had to know what to ask for, review the output carefully, and catch the factory_boy issue. But significantly faster than writing it from scratch.
Next steps
- How to write a CLAUDE.md for any project — the single most important setup step
- Cursor AI prompt engineering guide — if you prefer IDE-integrated vibe coding
- Agentic prompting — the underlying techniques that make AI coding tools work
- Cursor vs Claude Code vs Windsurf comparison — which tool to pay for as an Indian developer