Prompt engineering guides are full of clean, academic examples. "Ask the AI to summarize this document." "Request a JSON output." Sure. Helpful. Also not what I actually type.
Real prompting is messier. You're in the middle of work, you need something specific, and you need it now. Over the past couple of years, I've built up a small collection of templates that I actually use — the ones I've refined to the point where they reliably give me something useful on the first try.
I'm handing them to you. Adapt them. They're a starting point, not a formula.
1. The "Make Me Sound Smarter" Rewrite
What it does: Takes rough, brain-dump writing and turns it into clean, confident prose without changing your meaning or voice.
Rewrite the following text. Keep my exact meaning and opinions — don't add anything I haven't said. Improve clarity, flow, and sentence structure. Match the tone of the original (conversational, not formal). Cut anything redundant.
Text:
[paste your draft]
Why it works: The key phrase is "don't add anything I haven't said." Without that, models love to "enhance" your writing with generic content you didn't want. This keeps it anchored to your actual ideas.
I use this for emails, Slack messages, blog posts, and anything where I know what I want to say but can't quite get the words right.
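If you reuse a prompt like this often, it can help to keep it as a fill-in-the-blank string rather than retyping it. Here's a minimal Python sketch using only the standard library — the wording mirrors the template above, and the function name is my own, not something from a particular tool:

```python
from string import Template

# The rewrite prompt from above, with a placeholder for the draft text.
REWRITE_PROMPT = Template(
    "Rewrite the following text. Keep my exact meaning and opinions — "
    "don't add anything I haven't said. Improve clarity, flow, and "
    "sentence structure. Match the tone of the original (conversational, "
    "not formal). Cut anything redundant.\n\nText:\n$draft"
)

def build_rewrite_prompt(draft: str) -> str:
    """Fill the template with a rough draft, ready to paste into any chat UI."""
    return REWRITE_PROMPT.substitute(draft=draft)
```

The payoff is that when you tweak the wording, you tweak it in one place, and every future use picks up the refinement.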
2. The Devil's Advocate
What it does: Builds the strongest possible case against your own idea.
I'm about to [describe the decision or action]. I think it's a good idea because [your reasons].
Play devil's advocate. What are the strongest arguments against this? What am I not seeing? What could go wrong that I might be dismissing? Be direct — I'm not looking for validation.
Why it works: AI models have a strong bias toward agreeing with you. This prompt explicitly overrides that. The phrase "I'm not looking for validation" is load-bearing — without something like it, you'll get a polite "here are some considerations" response that softballs everything.
I run this before major decisions. Sometimes it confirms I'm on the right track. Sometimes it surfaces something I'd actually missed.
3. The Recursive Summary
What it does: Summarizes long documents in progressively shorter layers.
Read the following document and give me three versions of a summary:
1. One-paragraph summary (5–7 sentences) covering the main point, key supporting arguments, and conclusion.
2. Three-bullet TL;DR — the only things I absolutely need to know.
3. One sentence: the single most important takeaway.
Document:
[paste content]
Why it works: Sometimes you want depth, sometimes you want a quick scan, sometimes you need a one-liner to relay to someone else. Getting all three at once is more useful than asking for each separately, and the model tends to be more consistent when it can think through them in sequence.
4. The Email That Doesn't Sound Like AI Wrote It
What it does: Drafts professional emails that don't have that telltale AI stuffiness.
Write an email from me to [recipient] about [topic].
Context: [what's the situation — 2–3 sentences]
What I want to accomplish: [specific goal — "get a meeting", "decline politely", "follow up without being annoying", etc.]
Tone: [e.g., "professional but warm", "direct and brief", "firm without being rude"]
Constraints:
- No "I hope this email finds you well" or similar openers
- No "Please don't hesitate to reach out" closers
- Under [X] words
- Sign off as [your name]
Why it works: The constraints section is the secret. Without them, you'll get a textbook professional email with all the clichés included. By explicitly naming the phrases you hate, you route around them.
Also: specifying "what I want to accomplish" rather than just "what to say" tends to produce sharper, more purposeful emails.
5. The Thought-Partner Brainstorm
What it does: Generates ideas without the usual watered-down list of generic suggestions.
I need ideas for [the thing you're brainstorming].
Context: [relevant background — who this is for, what constraints exist, what's been tried already]
Give me 10 ideas. For each one:
- State the idea in one sentence
- Explain why it might work
- Name one risk or downside
Don't filter out unconventional ideas. I'd rather see 3 weird ones and 7 solid ones than 10 safe ones.
Why it works: Two things. Asking for a downside forces the model to think more critically rather than just pitching things, and explicitly inviting unconventional ideas gives it permission to break out of obvious territory.
The number "10" matters too. Ask for 3 and you'll get 3 safe options. Ask for 10 and the model has to dig deeper for the later ones — that's often where the interesting ideas live.
6. The Code Explainer (For Non-Coders)
What it does: Explains code in plain English, not developer-speak.
Explain the following code to me. I'm not a developer — I understand business logic and can follow step-by-step explanations, but I don't know programming syntax.
Explain:
1. What this code does overall (in one sentence)
2. What it does step by step
3. What would break if this code were removed or changed
4. Any important terms, defined simply
Code:
[paste code]
Why it works: "I'm not a developer" doesn't mean "explain it like I'm five." Adding "I can follow step-by-step explanations" tells the model to be thorough, just not technical. The "what would break" question is especially useful for understanding code dependencies.
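For a concrete sense of what you might paste in, here's a short Python function — a hypothetical illustration of my own, not from any particular codebase. It's the kind of snippet where the template's "what would break" question earns its keep, because the cap on the discount is easy to miss:

```python
def apply_discount(price: float, loyalty_years: int) -> float:
    """Business rule: 5% off per year of loyalty, capped at 20% total."""
    discount = min(0.05 * loyalty_years, 0.20)  # the cap is the subtle part
    return round(price * (1 - discount), 2)
```

Run through the template's four questions against this snippet yourself and you'll see why step 3 matters: remove the `min(...)` cap and a ten-year customer suddenly gets 50% off.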
7. The Research Briefing
What it does: Turns a topic into a structured briefing you can actually use.
Give me a structured briefing on [topic]. I know the basics but want to understand it more deeply.
Format:
**The one-sentence definition** (no jargon)
**Why it matters** (practical implications, 2–3 sentences)
**The main debate or tension** (what do experts disagree about?)
**Key terms I should know** (5 max, defined briefly)
**What most people get wrong** about this topic
**Where to go deeper** (what to search for if I want more)
Why it works: Most "explain X to me" prompts produce Wikipedia-style summaries. This template gets at the interesting parts — the debates, the misconceptions, the angles that don't show up in a surface-level overview.
The "what people get wrong" section in particular usually surfaces something genuinely useful.
8. The Decision Framework
What it does: Helps you think through a decision without getting vague philosophical advice.
I'm deciding between [Option A] and [Option B].
My situation: [2–3 sentences of relevant context]
What I'm optimizing for: [your actual goal — speed, cost, quality, etc.]
My constraints: [time, money, resources, etc.]
I don't want a pros/cons list. Instead:
1. Which option would you choose, and why? Be specific.
2. What information would change your recommendation?
3. What am I probably not thinking about?
Why it works: Asking for a recommendation instead of a pros/cons list forces the model to take a position. That's more useful. The "what would change your recommendation" question is underrated — it helps you identify what data would actually move the needle, rather than spinning in circles.
9. The "Explain Why This Isn't Working" Debugger
What it does: Not just for code — helps diagnose why anything isn't performing the way you expected.
Here is [a piece of writing / a campaign / a process / code]:
[paste it]
Here is what I expected to happen: [describe expected outcome]
Here is what's actually happening: [describe the reality]
Don't just tell me what's wrong. Explain why it's wrong — what's the root cause, not just the symptom. Then tell me the one change that would have the biggest impact.
Why it works: The "why" instruction separates this from just asking for a review. And constraining to one highest-impact change prevents the model from giving you a laundry list of ten things to fix, which usually results in you fixing nothing.
10. The Meeting Prep Briefing
What it does: Prepares you for a meeting or conversation with someone you don't know well.
I have a [meeting type] with [name/role] at [company/context].
What I know about them: [LinkedIn summary, recent news, things they've written, etc.]
What I want to get out of this meeting: [specific goal]
What they probably want from me: [your guess]
Give me:
1. Three things I should know going into this conversation
2. Two questions I should ask them
3. One thing I should avoid saying or doing
Why it works: It's a forcing function. Most people walk into meetings under-prepared because prep feels vague. This template makes prep concrete — you end up with specific things to say, ask, and avoid.
A Note on All of These
None of these are final versions. I've tweaked most of them dozens of times based on what actually came back. The best prompt is the one you've iterated on long enough that you know exactly what it's going to produce.
Start with one. Run it a few times. Notice where it breaks. Adjust. That's the whole game.
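One practical way to make that iteration loop cheap is to keep all your templates in a single place with named slots, so each tweak lives in exactly one string. A small sketch — the structure and names here are my own convention, not something the templates above require:

```python
# A tiny personal prompt library: each template is a format string
# with named slots. Two entries shown; add the rest the same way.
TEMPLATES = {
    "devils_advocate": (
        "I'm about to {action}. I think it's a good idea because {reasons}.\n"
        "Play devil's advocate. What are the strongest arguments against "
        "this? What am I not seeing? What could go wrong that I might be "
        "dismissing? Be direct — I'm not looking for validation."
    ),
    "one_liner": "One sentence: the single most important takeaway of:\n{text}",
}

def fill(name: str, **slots: str) -> str:
    """Fill a named template; str.format raises KeyError if a slot is missing,
    which catches half-filled prompts before you send them."""
    return TEMPLATES[name].format(**slots)
```

Whether you keep them in a script, a notes app, or a text expander matters less than keeping them somewhere you'll actually edit when a prompt misfires.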
If you want to understand why prompts like these work at a deeper level, the Intermediate Track breaks down the mechanics — few-shot examples, chain-of-thought, constraints, and how to structure complex instructions.
