By some estimates, over 40% of new code is now AI-generated. That number will keep going up. But most of the prompts driving that code are terrible — vague, underspecified, context-free. They produce plausible-looking code that breaks in production, tests that don't test anything, and refactors that introduce new bugs.
The prompts below are tested. I've run each one (or a close variant) on real projects in Cursor, Claude Code, or GitHub Copilot Chat. They work because they follow a consistent pattern: context + specific task + constraints + output format. A prompt missing any of those elements leaves quality on the table.
A note on tool-specific syntax: prompts marked (Cursor) use @ file mentions — replace with your actual file names. Prompts marked (Claude Code) are written for terminal/CLI context. Unmarked prompts work across all tools.
Section 1: Starting a new feature (prompts 1–10)
The most common mistake when starting a feature: jumping straight to implementation. These prompts force planning first.
Prompt 1 — Scope before coding
Before writing any code: I want to add [feature name] to this codebase.
Looking at @[relevant files], outline:
1. Which existing files need to change
2. Which new files need to be created
3. Any existing patterns I should follow
4. Any risks or dependencies I should know about
Wait for my approval before writing any code.
Why it works: Forces the model to read your existing code before proposing a solution. Prevents it from writing a feature in a style that doesn't match your codebase.
Prompt 2 — Architecture decision
I'm adding a [caching layer / queue system / notification system] to this app.
Given @[relevant config files] and @[relevant service files], which approach fits better:
Option A: [describe option]
Option B: [describe option]
Consider: existing patterns, maintenance burden, and the constraint that [key constraint].
Give me a recommendation with one-paragraph reasoning. No code yet.
Why it works: Bounds the decision space so you get a recommendation instead of an "it depends" essay.
Prompt 3 — API contract first
Before implementation, define the API contract for [feature].
Specify:
- Endpoint(s): method, path, request body, response body
- Error cases and their HTTP status codes
- Any auth/permission requirements
Output as a TypeScript interface or a short OpenAPI-style spec. No implementation yet.
Why it works: Aligns you and the model on exactly what's being built before any code exists. Catches misunderstandings early.
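As a sketch of the kind of output Prompt 3 asks for — the endpoint, field names, and error codes here are all illustrative, not a real API:

```typescript
// Illustrative contract for a hypothetical "create invite" feature.
// Method: POST /api/invites  (requires an authenticated admin)

interface CreateInviteRequest {
  email: string;                  // must be a valid email address
  role: "viewer" | "editor";
}

interface CreateInviteResponse {
  inviteId: string;
  expiresAt: string;              // ISO 8601 timestamp
}

// Error cases:
//   401 — missing or invalid auth token
//   409 — an active invite already exists for this email
//   422 — validation failure; body matches ApiError
interface ApiError {
  code: string;
  message: string;
}

// A conforming example request, to sanity-check the shape:
const exampleRequest: CreateInviteRequest = {
  email: "dev@example.com",
  role: "editor",
};
```

A contract this size is enough to catch a misunderstanding (wrong status code, missing field) before any handler code exists.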
Prompt 4 — Data model before logic
I need to store [data concept] for [feature].
Looking at @[schema file], design the database schema additions:
- New tables or columns needed
- Relationships to existing tables
- Indexes that should be added and why
- Any migration risks
Output as SQL or Drizzle/Prisma schema syntax. No application code yet.
Why it works: Data model mistakes are expensive to fix later. Front-loading this catches them before the feature is half-built.
Prompt 5 — Feature flag scaffolding
I want to ship [feature] behind a feature flag so I can enable it per-user.
Looking at @[config or feature-flag file if it exists]:
- If a feature flag system already exists: add [feature-name] flag following the existing pattern
- If none exists: propose a minimal feature flag system (< 50 lines) and implement [feature-name] as the first flag
The flag should be checkable in both server and client code.
Why it works: Gets you a complete flag implementation without over-engineering.
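For a sense of what a "minimal feature flag system" might look like, here is one sketch — the flag names, the User shape, and the pro-plan rollout rule are all illustrative:

```typescript
// Minimal feature flag module, importable from both server and client code.
// Flag names and rollout rules below are placeholders.

type FlagName = "newDashboard" | "bulkExport";

interface User {
  id: string;
  plan: "free" | "pro";
}

// Each flag maps to a per-user predicate.
const flagRules: Record<FlagName, (user: User) => boolean> = {
  newDashboard: (user) => user.plan === "pro", // enabled for pro users only
  bulkExport: () => false,                     // fully off for now
};

export function isEnabled(flag: FlagName, user: User): boolean {
  return flagRules[flag](user);
}
```

The union type on `FlagName` means a typo'd flag name is a compile error rather than a silently-false check.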
Prompt 6 — Greenfield feature implementation
Implement [feature name].
Context: @[relevant existing files]
What to build:
- [specific requirement 1]
- [specific requirement 2]
- [specific requirement 3]
Constraints:
- Follow the patterns in @[reference file]
- Do not install new dependencies — use what's in package.json
- Types must be strict — no 'any'
Start with the data layer, then the service layer, then the API/controller layer. Show me each layer before moving to the next.
Why it works: Layered implementation with review checkpoints. You catch mistakes at the data layer before the model builds two more layers on top of them.
Prompt 7 — Integration with existing service
Add [new capability] to the existing @[service file].
Rules:
- Add to the existing class/module — do not create a new file
- Follow the error handling pattern already used in this file
- Any new config values should go through the existing config system (@[config file])
- Write the new method(s) with full JSDoc
Output only the new method(s) plus any new imports needed.
Why it works: Explicit constraint to extend rather than replace prevents the model from rewriting the entire file.
Prompt 8 — Scaffolding a new module
Create a new [module name] module in [directory].
Follow the same structure as @[reference module directory]:
- One file per concern (types, service, controller, tests)
- Same export patterns
- Same error handling approach
For now, create the files with the structure and types, but stub out the implementations. I'll fill in the logic.
Why it works: Gets consistent structure without requiring full implementation upfront.
Prompt 9 — Third-party library integration
I want to integrate [library name] (version [X.Y.Z]) into this project.
Current setup: @[relevant config or entry files]
Tasks:
1. Install the library (show me the install command, don't run it)
2. Initialize it following their latest docs patterns
3. Create a wrapper/helper in @[appropriate directory] that our app will use — we should never import from [library] directly except in this wrapper
Do not change any existing files except to add the initialization call where appropriate.
Why it works: The wrapper constraint prevents library lock-in and makes future swaps easier.
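The wrapper the prompt asks for might look like this sketch — "analytics-sdk" and its `init`/`track` API are placeholders for whatever library you're integrating, stubbed here so the example is self-contained:

```typescript
// Wrapper module: the rest of the app imports from here, never from the
// library directly. The sdk object below stands in for a real import like:
//   import { init, track } from "analytics-sdk";
const sdk = {
  init: (_apiKey: string) => {},
  track: (_event: string, _props: Record<string, unknown>) => {},
};

let initialized = false;

export function initAnalytics(apiKey: string): void {
  sdk.init(apiKey);
  initialized = true;
}

export function trackEvent(name: string, props: Record<string, unknown> = {}): void {
  if (!initialized) throw new Error("call initAnalytics() first");
  sdk.track(name, props);
}
```

If the library is ever swapped out, only this one file changes; call sites keep using `trackEvent`.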
Prompt 10 — Post-feature cleanup
The [feature] implementation is done. Do a cleanup pass on the files I've modified:
@[file 1], @[file 2], @[file 3]
Check for:
- Dead code or commented-out blocks
- console.log or debug statements
- TODO comments that were addressed but not removed
- Inconsistent naming
- Any 'any' types that can be replaced with proper types
List the issues you find. Fix only things that are clearly cleanup — don't change logic.
Why it works: Models are good at this kind of static analysis and will catch things code review often misses.
Section 2: Debugging (prompts 11–20)
The key to good debugging prompts: give the model the error, the relevant code, and the context of what you expected to happen.
Prompt 11 — Error explanation
I'm getting this error:
[paste full error with stack trace]
The error occurs when [describe the action that triggers it].
Looking at @[relevant file]: explain what's causing this error. Don't fix it yet — I want to understand the root cause first.
Why it works: "Explain before fix" prevents the model from applying a surface fix that doesn't address the real problem.
Prompt 12 — Reproduction isolation
This bug is hard to reproduce reliably. Here's what I know:
- It happens when: [conditions]
- It doesn't happen when: [counter-conditions]
- The relevant code: @[file]
Help me write an isolated test case that reliably reproduces this bug. The test should fail right now and pass after the bug is fixed.
Why it works: A reproduction test is often more valuable than the fix — it validates the fix and prevents regression.
Prompt 13 — Fix with explanation
Here's the bug:
Error: [error message]
Occurs in: @[file], around line [N]
Expected behavior: [what should happen]
Actual behavior: [what's happening]
Fix the bug. After the fix, explain in 2 sentences: (1) what caused it, (2) why your fix resolves it.
Why it works: The explanation requirement forces the model to reason about the fix rather than pattern-matching to a superficial change.
Prompt 14 — Type error resolution
TypeScript is giving me this error:
[paste TS error]
In @[file], line [N]: [paste the problematic code]
Fix the type error. Do not use 'any' or 'as unknown as X' unless there's genuinely no typed alternative. If there's no clean typed solution, explain why and what the tradeoff is.
Why it works: The constraint against 'any' prevents the lazy fix that satisfies the compiler but destroys type safety.
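A sketch of the kind of typed alternative the prompt is fishing for — narrowing `unknown` with a type guard instead of casting. The User shape is illustrative:

```typescript
interface User {
  id: string;
  name: string;
}

// The lazy fix the prompt forbids:
//   const user = JSON.parse(raw) as any;

// Typed alternative: a guard that narrows unknown to User.
function isUser(value: unknown): value is User {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as Record<string, unknown>).id === "string" &&
    typeof (value as Record<string, unknown>).name === "string"
  );
}

function parseUser(raw: string): User {
  const parsed: unknown = JSON.parse(raw); // JSON.parse is inherently untyped
  if (!isUser(parsed)) throw new Error("payload is not a User");
  return parsed;
}
```

The cast inside the guard is contained: callers get a real `User` or an error, never a lie to the compiler.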
Prompt 15 — Performance problem
This function is slow — it takes [X ms] on [dataset size] inputs:
[paste function]
Identify the performance bottleneck. Then give me two options:
1. A quick fix that improves performance with minimal code change
2. A deeper optimization that's more work but gets better results
For each option: estimated improvement, tradeoffs, and implementation.
Why it works: Two-option framing forces the model to think about the tradeoff space, not just the first fix it sees.
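A classic instance of the "quick fix" category this prompt surfaces — an `includes` call inside a loop, replaced with a one-time `Set`. The functions here are illustrative:

```typescript
// Bottleneck: Array.includes inside filter makes this O(n * m).
function slowIntersection(a: number[], b: number[]): number[] {
  return a.filter((x) => b.includes(x));
}

// Quick fix: build a Set once so each lookup is O(1) — O(n + m) overall,
// at the cost of O(m) extra memory.
function fastIntersection(a: number[], b: number[]): number[] {
  const bSet = new Set(b);
  return a.filter((x) => bSet.has(x));
}
```

Both produce identical output; only the lookup cost changes, which is exactly the tradeoff the prompt asks the model to spell out.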
Prompt 16 — Race condition or async bug
I have an intermittent bug that seems related to async timing:
[describe the symptom]
Here's the relevant async code:
@[file], around line [N]
Identify potential race conditions or async ordering issues. Show me what could go wrong and how to fix it. Be specific about which operations might execute out of order.
Why it works: Explicitly calling it an async bug focuses the model on timing issues rather than logic errors.
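The most common shape of bug this prompt finds is a read-modify-write race, sketched below with an illustrative in-memory "db" — any `await` between the read and the write is a window for another operation to interleave:

```typescript
// Stand-in for shared state (a row, a cache entry, etc.).
const db = { balance: 100 };

// Race: both concurrent calls read the same starting balance,
// so one update is silently lost.
async function unsafeDeposit(amount: number): Promise<void> {
  const current = db.balance;    // read
  await Promise.resolve();       // yields — another deposit can run here
  db.balance = current + amount; // write based on a stale read
}

// One fix: serialize the critical section on a promise chain so
// read-modify-write operations can never interleave.
let queue: Promise<void> = Promise.resolve();

function safeDeposit(amount: number): Promise<void> {
  queue = queue.then(async () => {
    const current = db.balance;
    await Promise.resolve();
    db.balance = current + amount;
  });
  return queue;
}
```

Two concurrent `unsafeDeposit(10)` calls leave the balance at 110 (one lost update); the serialized version reaches 120. In a real app the fix might instead be a database transaction or an atomic increment, but the diagnosis pattern is the same.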
Prompt 17 — Regression hunting (Claude Code)
A regression was introduced between commit [A] and commit [B].
Symptom: [describe what broke]
Read the git diff between these commits and identify which change most likely caused this regression. Explain your reasoning.
Why it works: Claude Code can read git diffs and reason about them. Faster than reading the diff yourself on a large changeset.
Prompt 18 — Cross-browser or environment bug
This works in [environment A] but fails in [environment B]:
[describe the failure]
Looking at @[relevant file]: what's the likely cause of this environment-specific behavior? What should I check to confirm, and what's the fix?
Why it works: Environment-specific bugs need environment-specific hypotheses. The model often knows common cross-browser or cross-environment differences.
Prompt 19 — Third-party library issue
I'm using [library] version [X.Y.Z] and getting unexpected behavior:
[describe the behavior]
Here's how I'm using it:
[paste your usage code]
Is this a known issue with this library version? Is there a workaround? Check if there's a bug in my usage first before assuming it's a library issue.
Why it works: "Check my usage first" redirects the model away from blaming the library before ruling out user error.
Prompt 20 — Post-fix regression check
I just fixed [bug] by making this change:
[paste the change]
Looking at @[affected files]: what existing functionality might this change break? Give me a list of specific scenarios to test manually and specific test cases to add.
Why it works: Catching regression surface area right after a fix, while the change is fresh, is much cheaper than catching it in production.
Section 3: Refactoring (prompts 21–30)
The single most important principle in AI-assisted refactoring: small, targeted refactors, not wholesale rewrites.
Prompt 21 — Extract a function
In @[file], lines [N–M]: this code block does [describe what it does].
Extract it into a function called [function name]. The function should:
- Accept [parameters] as arguments
- Return [return type]
- Be placed [above/below the calling code / in @separate-file.ts]
Do not change any other code in the file.
Why it works: Precise scope and placement instructions prevent the model from reorganizing things you didn't ask it to touch.
Prompt 22 — Reduce duplication
I have similar code in multiple places:
@[file 1], lines [N–M]
@[file 2], lines [N–M]
Extract the shared logic into a utility function in @[target file].
Then update both call sites to use the utility.
Show me the utility function, then show me the updated call sites.
Why it works: Identifying the duplication explicitly — rather than asking it to "find duplication" — produces a focused, correct extraction.
Prompt 23 — Simplify a complex condition
This conditional in @[file] is hard to read:
[paste the condition]
Refactor it to be more readable. Options:
- Extract sub-conditions into named boolean variables
- Reorder clauses for logical clarity
- Add comments explaining non-obvious logic
Don't change behavior — same inputs should produce same outputs. Show me a quick test to confirm behavior is preserved.
Why it works: The "don't change behavior" constraint and the test requirement keep the refactor safe.
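The first option — extracting sub-conditions into named booleans — typically looks like this sketch; the eligibility rules here are made up for illustration:

```typescript
interface User {
  age: number;
  country: string;
  suspended: boolean;
}

// Before (the dense form the prompt targets):
//   if (user.age >= 18 && (user.country === "US" || user.country === "CA") && !user.suspended) { ... }

// After: each clause gets a name, and the final condition reads as a sentence.
function canPurchase(user: User): boolean {
  const isAdult = user.age >= 18;
  const inSupportedRegion = user.country === "US" || user.country === "CA";
  const inGoodStanding = !user.suspended;
  return isAdult && inSupportedRegion && inGoodStanding;
}
```

Same inputs, same outputs — only the readability changes, which is what the behavior-preservation test confirms.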
Prompt 24 — Convert to async/await
This function uses callbacks/promises chains that are hard to follow:
@[file], function [name]
Convert it to async/await syntax. Preserve:
- Identical error handling behavior
- The same function signature (except adding async keyword)
- All existing comments
Do not change any other code in the file.
Why it works: Explicit preservation requirements prevent behavior changes sneaking in with the syntax change.
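A before/after sketch of the conversion with those preservation rules applied — `fetchUser` and `fetchOrders` are illustrative stubs:

```typescript
const fetchUser = (id: string) => Promise.resolve({ id, name: "Ada" });
const fetchOrders = (userId: string) => Promise.resolve([{ userId, total: 42 }]);

// Before: a promise chain.
function getOrderTotalThen(userId: string): Promise<number> {
  return fetchUser(userId)
    .then((user) => fetchOrders(user.id))
    .then((orders) => orders.reduce((sum, o) => sum + o.total, 0))
    .catch((err) => {
      console.error("lookup failed", err);
      throw err;
    });
}

// After: same signature, same error handling, async/await syntax.
async function getOrderTotal(userId: string): Promise<number> {
  try {
    const user = await fetchUser(userId);
    const orders = await fetchOrders(user.id);
    return orders.reduce((sum, o) => sum + o.total, 0);
  } catch (err) {
    console.error("lookup failed", err);
    throw err;
  }
}
```

Note the `.catch` became a `try/catch` that logs and rethrows — identical behavior, which is exactly the kind of detail the preservation constraints keep the model from dropping.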
Prompt 25 — Split a large file
@[file] has grown too large ([N] lines). It currently handles [describe multiple concerns].
Propose a split: which concerns should go into which new files?
Show me the proposed file structure before moving anything.
The public API of [file] should remain identical — callers should not need to change their imports.
Why it works: Planning the split before execution prevents the common problem of breaking imports partway through.
Prompt 26 — Replace magic numbers/strings with constants
In @[file]: identify all magic numbers and strings that should be named constants.
For each one:
- Propose a constant name (follow SCREAMING_SNAKE_CASE)
- Show where to define it (same file, or a constants file?)
- Show all the places it's used
Do this as a list first. I'll confirm before you make changes.
Why it works: Review before execution for a refactor that touches many lines.
Prompt 27 — Add TypeScript types to untyped code
@[file] was written in JavaScript and converted to .ts without adding real types — it's full of 'any'.
Add proper TypeScript types. Rules:
- No 'any' — use 'unknown' if you genuinely can't determine the type
- Define interfaces/types at the top of the file or in a separate @[types file]
- Do not change any logic — type additions only
Why it works: Clear scope ("types only") prevents the model from refactoring logic while it's in the file.
Prompt 28 — Normalize error handling
Looking at @[file]: error handling is inconsistent. Some functions throw, some return null, some return {error: string}.
Propose a consistent error handling pattern for this file. Then refactor all the functions to use it.
Show me the pattern before making changes.
Why it works: Pattern-first approach produces a coherent result instead of multiple half-fixed functions.
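One pattern the model might propose for this prompt is a discriminated-union Result type, sketched below with an illustrative `parsePort` function:

```typescript
// Every function in the file returns this shape instead of
// mixing throws, nulls, and ad-hoc error objects.
type Result<T> =
  | { ok: true; value: T }
  | { ok: false; error: string };

function parsePort(raw: string): Result<number> {
  const port = Number(raw);
  if (!Number.isInteger(port) || port < 1 || port > 65535) {
    return { ok: false, error: `invalid port: ${raw}` };
  }
  return { ok: true, value: port };
}
```

Callers check `result.ok`, and the compiler narrows the type on each branch — accessing `.value` on an error result is a compile error, not a runtime surprise.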
Prompt 29 — Performance refactor
This code runs too slowly on large inputs:
@[file], function [name]
Refactor for performance. Constraints:
- Must produce identical output for all valid inputs
- Time complexity should improve from [current] to [target] if possible
- If you're making a tradeoff (e.g., more memory for less time), explain it
Show me the refactored version and explain the improvement.
Why it works: Specifying the complexity target focuses the model on the right class of solution.
Prompt 30 — Dependency injection refactor
This class/module has hard-coded dependencies that make it hard to test:
@[file]
Refactor to use dependency injection:
- Dependencies should be passed in (constructor, function parameters, or factory function — choose what fits the existing pattern)
- Keep the public API the same
- Show me the updated class and a quick example of how to instantiate it in tests with mocked dependencies
Why it works: DI refactors are high-value for testability and the model handles them well with the explicit pattern constraint.
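A minimal sketch of the constructor-injection result the prompt asks for — the `Mailer` interface and `ReportService` are illustrative:

```typescript
// The dependency is an interface, passed in rather than hard-coded.
interface Mailer {
  send(to: string, body: string): Promise<void>;
}

class ReportService {
  constructor(private mailer: Mailer) {}

  async emailReport(to: string): Promise<void> {
    await this.mailer.send(to, "Weekly report attached.");
  }
}

// In tests, pass a mock instead of a real SMTP client:
const sent: string[] = [];
const mockMailer: Mailer = {
  send: async (to) => { sent.push(to); },
};
const service = new ReportService(mockMailer);
```

Production code constructs `ReportService` with the real mailer once, at the composition root; tests assert against the `sent` array with no network involved.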
Section 4: Writing tests (prompts 31–40)
Tests written by AI models are often superficial — they test the happy path and ignore edge cases. These prompts compensate.
Prompt 31 — Unit test scaffold
Write unit tests for @[file], function [name].
Cover:
1. Happy path — [describe expected normal behavior]
2. Edge cases: [list specific edge cases]
3. Error cases: [list specific error conditions]
Use [Jest / Vitest / your framework]. Mock any external dependencies. Each test should have a descriptive name that explains what it's testing, not just "works correctly".
Why it works: Explicit edge case and error case requirements prevent the model from writing 10 variations of the happy path.
Prompt 32 — Test an untested function
@[file]: the function [name] has no tests. Before writing tests, analyze it:
1. What does it do?
2. What are the inputs and valid ranges?
3. What could go wrong?
Then write tests covering what you identified. Include at least one test that should fail right now if the function has a bug.
Why it works: The analysis step makes the model reason about the function rather than write tests mechanically.
Prompt 33 — Integration test
Write an integration test for the [feature] flow:
Start state: [describe the database/state before the test]
Action: [describe what the test does — HTTP request, function call, etc.]
Expected result: [describe what should be true after]
Use [your testing framework]. Use real database operations (not mocks) but use the test database. Clean up after the test.
Why it works: Integration tests need real state. Specifying the start state and cleanup requirement prevents flaky tests.
Prompt 34 — API endpoint test
Write tests for the [METHOD] [/path] endpoint in @[route file].
Test:
1. Successful request with valid auth and valid body — expect [status code] and [response shape]
2. Missing auth — expect 401
3. Invalid body — expect 422 with error details
4. [Any business logic edge cases specific to this endpoint]
Use [supertest / your HTTP testing library]. Use the test database.
Why it works: Auth and validation cases are the ones most often missing from AI-generated endpoint tests.
Prompt 35 — Test coverage gap analysis (Claude Code)
Look at @[file] and its test file @[test file].
Identify: which functions, branches, and error cases in [file] are not covered by the existing tests?
Output as a prioritized list — highest value missing tests first.
Then write the top 5 missing tests.
Why it works: Gap analysis produces focused tests rather than redundant tests for already-covered paths.
Prompt 36 — Snapshot test review
@[snapshot file] has snapshot tests that haven't been reviewed in a while.
Looking at @[component file]: do the current snapshots actually capture meaningful behavior, or are they just capturing markup that could change for trivial reasons?
For any snapshots that are testing the wrong thing, propose what they should test instead (e.g., specific prop rendering, conditional display, accessibility attributes).
Why it works: Snapshot tests are often cargo-culted. This forces an audit before writing more of the same.
Prompt 37 — Property-based test
Write a property-based test for @[file], function [name].
The function: [brief description]
Key property to test: for any valid input, [property that should always hold].
Use [fast-check / your property testing library]. Define the input generator and the property assertion.
Why it works: Property-based testing is underused, and AI models know how to write these tests if you ask explicitly.
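With fast-check the property would sit inside `fc.assert(fc.property(...))`; the same idea hand-rolled (generator plus assertion loop, no library) looks like this sketch, testing an illustrative sort function:

```typescript
// Properties under test: for any array of numbers, sortAsc returns an
// array of the same length whose elements are in non-decreasing order.

function sortAsc(xs: number[]): number[] {
  return [...xs].sort((a, b) => a - b);
}

// Input generator: random arrays of varying length, including empty.
function randomArray(): number[] {
  const len = Math.floor(Math.random() * 20);
  return Array.from({ length: len }, () => Math.floor(Math.random() * 200) - 100);
}

function checkSortProperties(runs = 100): void {
  for (let i = 0; i < runs; i++) {
    const input = randomArray();
    const output = sortAsc(input);
    if (output.length !== input.length) {
      throw new Error(`length changed for input ${JSON.stringify(input)}`);
    }
    for (let j = 1; j < output.length; j++) {
      if (output[j - 1] > output[j]) {
        throw new Error(`not sorted for input ${JSON.stringify(input)}`);
      }
    }
  }
}
```

The library version adds shrinking (minimizing failing inputs automatically), which is worth having — but the structure the prompt elicits is the same.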
Prompt 38 — Test data factory
I need test data factories for @[schema/types file].
Create factory functions for: [Type1], [Type2], [Type3]
Each factory should:
- Accept optional overrides for any field
- Generate sensible defaults for all required fields
- Use realistic fake data (not "test", "foo", "123")
Place them in @[test helpers directory].
Why it works: Good test factories with realistic defaults make tests more readable and more likely to catch real bugs.
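A sketch of one such factory, meeting all three requirements — the User shape and default values are illustrative:

```typescript
interface User {
  id: string;
  email: string;
  name: string;
  plan: "free" | "pro";
}

let nextId = 1;

// Realistic defaults, unique ids, and optional per-field overrides.
function makeUser(overrides: Partial<User> = {}): User {
  const id = String(nextId++);
  return {
    id,
    email: `user${id}@example.com`,
    name: "Priya Sharma",
    plan: "free",
    ...overrides,
  };
}
```

In a test, `makeUser({ plan: "pro" })` spells out only the field that matters to that test — everything else is a sensible default, which is what keeps the test readable.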
Prompt 39 — Flaky test diagnosis
This test fails intermittently — roughly 1 in 10 runs:
[paste the test]
What could make this test non-deterministic? Check for:
- Time-dependent assertions
- Race conditions in async setup/teardown
- External dependencies (network, filesystem, random)
- Test ordering dependencies
Propose a fix for each issue you find.
Why it works: Flaky test causes are predictable. The checklist format ensures systematic coverage.
Prompt 40 — Test for a bug fix
I just fixed this bug: [describe the bug]
The fix is in @[file].
Write a regression test that:
1. Would have failed before the fix
2. Passes after the fix
3. Is named to describe the specific bug scenario
The test should live in @[test file].
Why it works: Every bug fix deserves a regression test. This prompt makes writing it automatic.
Section 5: Code review and documentation (prompts 41–50)
Prompt 41 — Pre-PR self-review
I'm about to open a PR for these changes:
@[modified files]
Do a pre-PR review. Flag:
1. Logic errors or incorrect assumptions
2. Missing error handling
3. Security issues (injection, auth bypass, data exposure)
4. Performance problems
5. Missing tests for new code
Be specific — line numbers and file names. Skip style nitpicks.
Why it works: The specificity requirement prevents vague feedback. Line numbers make issues actionable.
Prompt 42 — Security review
Review @[file] for security issues. Focus on:
- Input validation and sanitization
- SQL/NoSQL injection risk
- Authentication and authorization gaps
- Sensitive data in logs or responses
- Dependency vulnerabilities you can identify from the imports
For each issue: severity (high/medium/low), description, and recommended fix.
Why it works: Security reviews need structured output. This format makes triage easy.
Prompt 43 — Code smell detection
Look at @[file] and identify code smells:
- Functions that are too long (> ~40 lines)
- Functions that do too many things
- Deep nesting (> 3 levels)
- Repeated code that should be extracted
- Non-obvious variable names
Output as a list with specific line numbers. Don't fix anything — just identify.
Why it works: Identification-only mode gets a clean list without the model rewriting code you didn't ask it to touch.
Prompt 44 — JSDoc / docstring generation
Add JSDoc comments to all exported functions in @[file].
For each function, document:
- @description — what it does (one sentence)
- @param — each parameter with type and description
- @returns — the return value and type
- @throws — any errors that can be thrown, and under what conditions
- @example — a short usage example
Do not change any code — documentation additions only.
Why it works: The @throws and @example requirements produce documentation that's actually useful, not just repeating the function name.
Prompt 45 — README section for a new feature
I've added [feature name] to this project.
Looking at the implementation in @[relevant files]:
Write a README section covering:
1. What it does (1 paragraph)
2. How to configure it (with example config)
3. How to use it (with a code example)
4. Any gotchas or limitations
Match the tone and style of @[README.md].
Why it works: The style match instruction produces documentation that doesn't look like it was written by a different author.
Prompt 46 — Architecture decision record
We decided to [architectural decision — e.g., use Redis for caching instead of in-memory].
Write an Architecture Decision Record (ADR) for this decision.
Include:
- Context: what problem we were solving
- Decision: what we chose
- Alternatives considered: at least 2, with brief tradeoffs
- Consequences: what this decision means for future development
Keep it concise — under 300 words. Format as Markdown.
Why it works: ADRs are high-value documentation that almost nobody writes. This makes it easy.
Prompt 47 — Changelog entry
Here are the changes in this release:
@[git diff or commit list]
Write a changelog entry in the style of @[CHANGELOG.md].
Group changes by: Breaking Changes, New Features, Bug Fixes, Performance, Documentation.
Use plain language — not commit messages, but user-facing descriptions.
Why it works: Changelogs written from git diffs are more accurate than changelogs written from memory.
Prompt 48 — Inline comment improvement
Look at @[file]: the inline comments are either missing or not helpful (they describe what the code does, not why).
For the non-obvious parts of this code, add or replace comments that explain:
- Why this approach was chosen (not what it does)
- Any non-obvious edge case being handled
- Any external constraint or business rule driving the logic
Remove comments that just restate what the code does.
Why it works: The "why not what" framing is the most important principle in code commenting, and the model needs to be told it explicitly.
Prompt 49 — PR description
I'm opening a PR with these changes:
@[modified files list or brief description of changes]
Write a PR description with:
- Summary: what this PR does in 2-3 sentences
- Changes: a bulleted list of specific changes
- Testing: what tests exist and how to manually verify
- Screenshots: [if UI change] describe what screenshots to attach
Keep it under 300 words. Skip obvious filler like "This PR implements...".
Why it works: Good PR descriptions speed up review. This produces ones that reviewers actually read.
Prompt 50 — Technical debt log entry
I just merged a quick fix that I know isn't the right long-term solution:
[describe the fix and why it's a shortcut]
Write a technical debt log entry for @[TECH_DEBT.md or similar]:
- What we did and why
- What the ideal solution looks like
- What triggers should prompt us to fix this (e.g., "when X reaches N users", "before we open the API publicly")
- Estimated effort to fix it properly
This should take 5 minutes to read in 6 months and make the decision clear.
Why it works: Technical debt that's documented with context and trigger conditions actually gets paid down. Debt that's just "TODO: fix this" doesn't.
Adapting these prompts to your tools
In Cursor: Use @ mentions to inject file context. For the Agent mode prompts (particularly in sections 1 and 3), open Agent mode explicitly rather than the chat panel. The .cursorrules file can hold project-level constraints so you don't have to include them in every prompt.
In Claude Code: The codebase exploration prompts (17, 35) work especially well because Claude Code can read arbitrary files without you explicitly @ mentioning them. For multi-file tasks, let it plan first by adding "list the files you'll touch before making any changes."
In Copilot Chat: The scoped, single-file prompts work best. Copilot Chat doesn't have deep agentic execution, so stick to prompts 21–30 (single-file refactors) and the explanation-focused prompts (11, 13, 15).
The pattern that makes all of these work: context + specific task + constraints + output format. Any prompt missing one of those elements is improvable. The single biggest upgrade most developers can make is adding constraints — especially "do not change X" constraints that protect code you didn't ask the model to touch.
For more on the vibe coding workflow and how to structure AI-assisted development sessions, see the vibe coding prompting guide. For tool-specific guidance on Cursor specifically, see the Cursor AI prompt engineering guide.