Every few weeks another "ultimate prompt engineering guide" lands with 20+ techniques, a dozen acronyms, and enough jargon to make your head spin. I've read most of them. I've also spent three years using AI daily for real work — writing, coding, research, data pipelines — and I can tell you honestly: five techniques handle 90% of what actually matters.
The rest are either situational edge cases or just recombinations of these five with different names. Learn these well and you'll outperform most people who've "studied" prompt engineering for months.
Technique 1: Role + task framing
Before you describe what you want, tell the model who it's supposed to be and what kind of task is coming. This isn't magic — it's context-setting. Models generate tokens based on probability distributions, and "act as a senior copywriter" meaningfully shifts what the model considers likely next tokens.
The pattern:
You are a [role] with expertise in [domain]. Your job is to [task].
Compare these two prompts for the same task:
Without role framing:
"Write a product description for a standing desk."
With role framing:
"You are a direct-response copywriter specializing in B2B office furniture. Write a product description for a standing desk targeted at software engineers who have recurring back pain. Lead with the problem, not the product."
The second version gives the model enough context to make real choices — which pain points to lead with, what tone to use, what audience assumptions to make. You're not hoping it guesses right; you're telling it.
One thing I've learned: be specific about the role's expertise area, not just the job title. "Marketing expert" is weak. "Content marketer who writes for developer audiences" is strong. The specificity constrains the output space toward what you actually want.
Pair this with the role prompting techniques in our beginner track if you want a deeper treatment.
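If you reuse this pattern often, it's worth encoding as a tiny template helper. Here's a minimal Python sketch; the function name and parameters are my own, not from any library:

```python
def role_prompt(role: str, domain: str, task: str) -> str:
    """Fill the 'You are a [role] with expertise in [domain]' pattern."""
    return (
        f"You are a {role} with expertise in {domain}. "
        f"Your job is to {task}."
    )

# The standing-desk example from this section, rebuilt from the template
prompt = role_prompt(
    role="direct-response copywriter",
    domain="B2B office furniture",
    task="write a product description for a standing desk "
         "targeted at software engineers with recurring back pain",
)
```

The payoff is consistency: every prompt you send starts with the same specific role framing instead of whatever you remembered to type that day.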
Technique 2: Few-shot examples
"Show, don't tell" is the most reliable prompt improvement I know. When you describe what you want in words, the model has to interpret your description. When you show it an example, it pattern-matches directly.
Few-shot prompting means providing 2–5 input/output examples before the real task:
Here are examples of the format I want:
Input: "Product launch email for a B2B SaaS tool"
Output: Subject: [specific benefit], not a teaser headline
Body: Problem in line 1. Solution in line 2. One CTA only.
Input: "Webinar follow-up email"
Output: Subject: References what happened, not what's next
Body: Thanks, recap, next step.
Now write: [your actual task]
The examples do work that paragraphs of description can't. They establish rhythm, format, vocabulary, and implicit rules the model picks up without you having to spell them out.
When you don't have examples of your own, you can generate them in a first pass:
- Ask the model for 3 example outputs given a rough brief
- Edit them to what you actually want
- Use those edited examples as few-shot inputs in your real prompt
This self-bootstrapping trick saves a lot of time. The few-shot prompting lesson walks through it in detail.
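Once you have edited examples, assembling them into the prompt is mechanical. A minimal sketch, with function and variable names of my own invention:

```python
def few_shot_prompt(examples: list[tuple[str, str]], task: str) -> str:
    """Build a few-shot prompt: 2-5 input/output pairs, then the real task."""
    lines = ["Here are examples of the format I want:", ""]
    for example_input, example_output in examples:
        lines.append(f'Input: "{example_input}"')
        lines.append(f"Output: {example_output}")
        lines.append("")
    lines.append(f"Now write: {task}")
    return "\n".join(lines)

prompt = few_shot_prompt(
    examples=[
        ("Product launch email for a B2B SaaS tool",
         "Subject: specific benefit. Body: problem, solution, one CTA."),
        ("Webinar follow-up email",
         "Subject: references what happened. Body: thanks, recap, next step."),
    ],
    task="Re-engagement email for churned trial users",
)
```

Keeping examples as data (a list of pairs) means you can swap them per task without rewriting the prompt scaffolding.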
Technique 3: Output format specification
The single highest-leverage sentence you can add to almost any prompt is: "Format your response as [format]."
It sounds trivial. It isn't. Without explicit format instructions, every model has its own defaults — verbose explanatory paragraphs, excessive bullet points, unnecessary headers, markdown in contexts where you need plain text. Specifying the format up front eliminates most of the post-processing you'd otherwise have to do.
Formats worth knowing:
JSON — for structured data you'll process programmatically:
Return a JSON object with keys: "title" (string), "tags" (array of strings), "difficulty" (one of: beginner | intermediate | advanced).
Markdown table — for comparisons:
Output a markdown table with columns: Tool, Price, Best for, Limitation.
Numbered steps — for processes:
Format as numbered steps. Each step: one sentence action, one sentence why it matters.
Bullet list with constraints — for summaries:
5 bullet points max. Each bullet under 20 words. No filler, no "this means that."
Plain text, no markdown — often overlooked:
Respond in plain text. No bullet points, no headers, no bold. Just paragraphs.
The last one matters more than people realize. If you're pasting output into a tool that doesn't render markdown, or into a voice interface, you want clean text. Specify it or you'll spend time stripping asterisks.
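When you request JSON (as in the spec above), validate it before your pipeline trusts it; models occasionally drift from the schema. A minimal Python sketch, checking the exact keys from the example spec in this section (the function name is hypothetical):

```python
import json

def parse_spec_json(raw: str) -> dict:
    """Validate model output against the spec: title (string),
    tags (array of strings), difficulty (beginner | intermediate | advanced)."""
    data = json.loads(raw)
    if not isinstance(data.get("title"), str):
        raise ValueError("'title' must be a string")
    tags = data.get("tags")
    if not (isinstance(tags, list) and all(isinstance(t, str) for t in tags)):
        raise ValueError("'tags' must be an array of strings")
    if data.get("difficulty") not in {"beginner", "intermediate", "advanced"}:
        raise ValueError("'difficulty' must be beginner, intermediate, or advanced")
    return data

result = parse_spec_json(
    '{"title": "Webhooks 101", "tags": ["api", "events"], "difficulty": "beginner"}'
)
```

A failed validation is also a useful signal: it tells you exactly which part of your format instruction the model ignored.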
Technique 4: Chain-of-thought reasoning
For anything involving reasoning — analysis, decisions, debugging, math, comparisons — add "think through this step by step" or "reason through this before giving your answer." This is chain-of-thought prompting in its simplest form.
Why it works: when you ask a model to jump straight to an answer, it commits to a token path early and has to stay consistent with that choice. When you ask it to reason first, it generates intermediate steps that constrain the final answer toward logical consistency.
The difference is measurable. On multi-step reasoning tasks, published benchmarks show that chain-of-thought prompting can improve accuracy substantially, with the size of the gain depending on the model and the task's complexity. On simpler tasks, it doesn't hurt — it just adds a few tokens.
Practical application:
Without CoT:
"Should I use PostgreSQL or MongoDB for this use case: [description]"
With CoT:
"Think through the trade-offs between PostgreSQL and MongoDB for this specific use case: [description]. Consider: data structure, query patterns, scaling needs, team familiarity. Reason through each factor, then give me a recommendation with your reasoning."
The CoT version forces the model to surface its reasoning, which you can then evaluate and push back on if it makes a bad assumption. With the direct version, you just get an answer with no visibility into why.
One variation: ask it to "think out loud and then give a final recommendation in bold." This separates the reasoning trace from the answer, making it easy to skip the trace if you just want the recommendation.
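The CoT version with enumerated factors is also just a template you can reuse. A sketch assuming nothing beyond the prompt text in this section:

```python
def cot_prompt(question: str, factors: list[str]) -> str:
    """Wrap a question in a step-by-step reasoning request over named factors."""
    return (
        f"Think through this question step by step: {question}\n"
        f"Consider: {', '.join(factors)}.\n"
        "Reason through each factor, then give a final recommendation in bold."
    )

prompt = cot_prompt(
    "Should I use PostgreSQL or MongoDB for this use case: [description]",
    ["data structure", "query patterns", "scaling needs", "team familiarity"],
)
```

Naming the factors explicitly matters: it keeps the reasoning trace organized along dimensions you chose, so you can spot a bad assumption in a specific factor rather than in a wall of text.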
Technique 5: Constraint injection
Most prompts tell the model what TO do. The best prompts also tell it what NOT to do.
Constraint injection means explicitly stating the limits, exclusions, and anti-patterns you want the model to avoid. This sounds defensive — and it is. You're anticipating failure modes and closing them off.
Common constraints worth adding:
- Length: "No longer than 200 words." "Exactly 3 paragraphs." "Under 280 characters."
- Tone exclusions: "Do not use corporate language. No words like 'leverage,' 'synergy,' 'streamline.'"
- Format exclusions: "Do not use bullet points. Write in prose."
- Content exclusions: "Do not include pricing information. Do not mention competitors."
- Structural rules: "Lead with the most important point. Do not bury the recommendation."
The tone exclusion is one I use constantly. AI-generated text drifts toward marketing speak — "harness the power of," "unlock your potential," "seamlessly integrate." Explicitly banning these words produces tighter output.
A combined example:
You are a technical writer for a developer audience. Write a 3-paragraph explanation of webhooks.
Constraints:
- No analogies. Developers don't need you to compare webhooks to a doorbell.
- No passive voice.
- Lead with what a webhook does, not what it is.
- Maximum 250 words total.
That last constraint list makes a real difference. Without it, you'll get a doorbell analogy in paragraph one, every time.
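Constraint blocks like the one above can be appended mechanically to any base prompt. A minimal sketch (my own helper, no library assumed):

```python
def with_constraints(prompt: str, constraints: list[str]) -> str:
    """Append an explicit 'Constraints:' block to any prompt."""
    bullets = "\n".join(f"- {c}" for c in constraints)
    return f"{prompt}\n\nConstraints:\n{bullets}"

# The webhooks example from this section, rebuilt from the helper
prompt = with_constraints(
    "You are a technical writer for a developer audience. "
    "Write a 3-paragraph explanation of webhooks.",
    [
        "No analogies.",
        "No passive voice.",
        "Lead with what a webhook does, not what it is.",
        "Maximum 250 words total.",
    ],
)
```

Keeping constraints as a list also lets you maintain a standing ban list (the "leverage"/"synergy" words, for instance) and apply it to every prompt you send.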
How to stack all five into a single master prompt
Here's all five techniques combined into a single prompt template:
You are a [ROLE] with expertise in [DOMAIN]. ← Technique 1: Role framing
[2–3 examples of input/output] ← Technique 2: Few-shot examples
Task: [describe the actual task]
Think through the key considerations before writing, then produce your output. ← Technique 4: Chain-of-thought
Format: [specific format — JSON / markdown table / numbered steps / plain prose] ← Technique 3: Format spec
Constraints: ← Technique 5: Constraint injection
- [What NOT to do]
- [Length limit]
- [Tone rule]
Filled in for a real use case — writing a cold email:
You are a B2B sales copywriter who specializes in outbound email for developer tools.
Example 1:
Input: "Email to a CTO about our API monitoring product"
Output: Subject: Your API is down. We saw it. | Body: 3 sentences max, no pitch, end with a question.
Example 2:
Input: "Email to a VP Eng about deployment tooling"
Output: Subject: How [Company] reduced deploy time by 40% | Body: Lead with outcome, one social proof line, one CTA.
Task: Write a cold email to a VP of Engineering at a 200-person SaaS company about our database performance monitoring tool. We know they're scaling from 50k to 500k users this year.
Think through what this VP cares about most given that growth stage, then write the email.
Format: Subject line on line 1, then the email body. No headers or labels.
Constraints:
- Under 120 words total
- No "I hope this email finds you well" or any version of that opener
- No product feature list — focus on outcomes
- End with a specific question, not "let me know if you're interested"
That's a real prompt I'd actually send to Claude or GPT-4o. It uses all five techniques and produces output that's consistently usable with minimal editing.
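If you send prompts like this from code, the whole stack can be generated by one assembly function. A sketch of that idea; every name here is my own, nothing is a real API:

```python
def master_prompt(role: str, domain: str,
                  examples: list[tuple[str, str]],
                  task: str, format_spec: str,
                  constraints: list[str]) -> str:
    """Stack all five techniques into one prompt string."""
    parts = [f"You are a {role} with expertise in {domain}.", ""]  # Technique 1: role framing
    for example_input, example_output in examples:                 # Technique 2: few-shot examples
        parts += [f'Input: "{example_input}"', f"Output: {example_output}", ""]
    parts += [
        f"Task: {task}",
        # Technique 4: chain-of-thought
        "Think through the key considerations before writing, then produce your output.",
        f"Format: {format_spec}",                                  # Technique 3: format spec
        "Constraints:",                                            # Technique 5: constraint injection
    ]
    parts += [f"- {c}" for c in constraints]
    return "\n".join(parts)

prompt = master_prompt(
    role="B2B sales copywriter",
    domain="outbound email for developer tools",
    examples=[("Email to a CTO about our API monitoring product",
               "Subject: Your API is down. We saw it. | Body: 3 sentences max.")],
    task="Cold email to a VP of Engineering about database performance monitoring.",
    format_spec="Subject line on line 1, then the email body.",
    constraints=["Under 120 words total", "End with a specific question"],
)
```

The benefit of assembling prompts this way is the same one the article closes on: when output goes wrong, each technique lives in its own argument, so you know exactly which component to adjust.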
The one thing that makes these techniques work
None of these techniques are magic in isolation. What makes them work is that they eliminate ambiguity. Every time the model has to guess what you want, you risk getting something you don't. These five techniques systematically reduce the number of guesses the model has to make.
Role framing tells it who. Few-shot examples tell it how. Format specification tells it shape. Chain-of-thought tells it how to think. Constraint injection tells it what not to do.
When you've specified all five, there's very little room for the model to go wrong — and when it does, you know exactly which component failed and can adjust it.
Start with technique 3 (format specification) if you only add one thing to your prompts today. It produces the fastest visible improvement with the least effort. Then layer in the rest as you develop a feel for where your prompts are falling short.
For deeper coverage of each technique, the intermediate track has dedicated lessons on few-shot prompting, chain-of-thought, and system prompts. Start there when you're ready to go beyond the basics.



