When you look at prompts that consistently produce great results, you'll notice they tend to include the same set of components. Learning to identify and use these four elements makes prompt writing much more systematic — and much less trial and error.
The Four Elements
1. Instruction
The instruction tells the model what to do. It's the core of any prompt.
Summarize the following article.
Translate this paragraph into Spanish.
Write a Python function that...
Review the code below and identify bugs.
An instruction without other elements works for simple tasks. For anything complex, you'll need the other three.
Common instruction mistakes:
- Too vague: "Help me with this document" (help how?)
- Too generic: "Write something good" (good how?)
- Missing action verb: "Customer feedback analysis" (should I summarize? rate? categorize?)
2. Context
Context provides background information that shapes how the model interprets and responds to the task.
You are a senior software engineer reviewing code for a fintech startup.
This article is from a medical journal and uses technical terminology.
The audience is non-technical executives who make budget decisions.
This is a rough first draft that needs significant restructuring.
Context answers questions the model would otherwise have to guess: Who is the audience? What's the goal? What constraints apply? What's the tone?
When to include context:
- When the audience or purpose would change the response
- When domain expertise is relevant
- When there are implicit constraints (length, formality, technical level)
- When the role or persona matters
3. Input Data
Input data is the content the model should actually work with — the raw material for the task.
Article to summarize:
[paste article here]
Code to review:
[paste code here]
Customer review to analyze:
[paste review here]
Input data is different from context: context shapes the approach, while input data is what gets processed.
Tips for input data:
- Use clear delimiters to separate it from the instruction and context (e.g., triple backticks, XML tags, or a labeled header)
- Be explicit about where it starts and ends
- If it's long, mention its length or format upfront
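The delimiter tips above are easy to apply programmatically. Here's a minimal sketch in Python; the `wrap_input` helper and the choice of triple-backtick delimiters are illustrative, not a standard API:

```python
DELIM = "`" * 3  # triple backticks, built here rather than typed literally

def wrap_input(label: str, content: str) -> str:
    """Wrap input data in a labeled, delimited block so the model
    can tell exactly where it starts and ends."""
    return f"{label}:\n{DELIM}\n{content}\n{DELIM}"

prompt = (
    "Summarize the following article in three bullet points.\n\n"
    + wrap_input("Article to summarize", "The quarterly report shows...")
)
```

The label doubles as the "be explicit about where it starts" cue, and the closing delimiter marks where it ends.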
4. Output Format
Output format tells the model how to structure its response.
Respond in bullet points.
Return a JSON object with keys: "summary", "sentiment", "score".
Write 3 paragraphs, each starting with a topic sentence.
Answer in one sentence, under 25 words.
Use a table with columns: Feature | Pros | Cons.
Without this element, the model decides the format itself — which may not be what you need, especially for programmatic use or documents with strict style requirements.
When output format matters most:
- Programmatic parsing (you need valid JSON, CSV, specific structure)
- Consistent document templates
- Length constraints (brief executive summary vs. detailed analysis)
- Downstream formatting (you're embedding the output in a specific UI)
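The programmatic-parsing case is where a missing output-format element bites hardest: your code has to consume whatever comes back. A small defensive parser is a common pattern; this sketch (the function name and fence-stripping behavior are assumptions, not a library API) tolerates the code fence some models wrap around JSON:

```python
import json

FENCE = "`" * 3  # triple backticks

def parse_model_json(raw: str) -> dict:
    """Parse a model response that should be a JSON object,
    tolerating a code fence wrapped around the JSON."""
    text = raw.strip()
    if text.startswith(FENCE):
        lines = text.splitlines()
        text = "\n".join(lines[1:-1])  # drop opening and closing fence lines
    return json.loads(text)

reply = '{"summary": "Fast shipping praised.", "sentiment": "positive", "score": 5}'
data = parse_model_json(reply)
```

Even with a fallback like this, the more precisely the prompt pins down the format, the less repair code you need.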
Combining the Elements
Here's how the four elements come together in practice:
Simple task (just instruction + input):
Translate this sentence to French:
"The product will ship within 3 business days."
Medium task (instruction + context + input):
You are a professional editor. The author's voice is direct and uses short sentences.
Improve the clarity of this paragraph without changing the author's style:
[paragraph]
Full task (all four elements):
[Instruction]
Analyze this customer review and categorize the feedback.
[Context]
We're a SaaS company. Our product has three main areas: onboarding, core features, and support.
The audience for this analysis is the product team.
[Input data]
Review: "Setting up was confusing and took way longer than expected. But once I got it working,
the features are genuinely useful. Support was very responsive when I had questions."
[Output format]
Return a JSON object:
{
"categories": [{"area": "onboarding|features|support", "sentiment": "positive|negative|neutral", "quote": "key phrase"}],
"overall_sentiment": "positive|negative|mixed",
"priority_for_product_team": "string"
}
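When a prompt specifies a schema like the one above, it's worth validating the response before downstream code depends on it. A minimal sketch, assuming the schema from the example (the function name and error handling are illustrative):

```python
import json

EXPECTED_KEYS = {"categories", "overall_sentiment", "priority_for_product_team"}
VALID_AREAS = {"onboarding", "features", "support"}

def validate_review_analysis(raw: str) -> dict:
    """Check that the model's JSON matches the schema requested in the
    output-format element before handing it to the product team pipeline."""
    data = json.loads(raw)
    missing = EXPECTED_KEYS - data.keys()
    if missing:
        raise ValueError(f"model response missing keys: {sorted(missing)}")
    for item in data["categories"]:
        if item.get("area") not in VALID_AREAS:
            raise ValueError(f"unexpected area: {item.get('area')}")
    return data
```

Enumerating the allowed values in the prompt (as the example does with `onboarding|features|support`) is what makes this kind of validation possible at all.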
A Practical Framework
When writing a new prompt, ask yourself:
- Instruction: What specifically do I want the model to do? (Action verb + target)
- Context: What background would change how the model approaches this?
- Input: What content should it work with? How should I delimit it?
- Output: What format, length, or structure do I need?
For simple tasks, start with just the instruction and see if the result is good. Add the other elements selectively when you need to constrain or shape the output.
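The checklist above maps naturally onto a small helper that assembles a prompt from whichever elements a task needs. A minimal sketch; the function name and the blank-line separator are illustrative choices:

```python
def build_prompt(instruction: str,
                 context: str = "",
                 input_data: str = "",
                 output_format: str = "") -> str:
    """Assemble a prompt from the four elements, skipping any
    element the task doesn't need."""
    parts = [p for p in (context, instruction, input_data, output_format) if p]
    return "\n\n".join(parts)

# Simple task: instruction + input only
simple = build_prompt(
    "Translate this sentence to French:",
    input_data='"The product will ship within 3 business days."',
)
```

Starting with only the `instruction` argument and adding the others when the output needs shaping mirrors the advice above.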
The Elements Aren't Always Separate
In practice, you don't need to label these elements or structure them in any particular order. The model doesn't care about headers like [Context] and [Input]. What matters is that the information is present and clear.
You're helping a high school student understand World War I.
Explain why the war started. Use simple language and one analogy.
Keep the explanation under 150 words.
This has context (high school student), instruction (explain why the war started, using simple language and one analogy), and output format (under 150 words) — all in three natural sentences, no labels needed.
The framework is a mental checklist, not a rigid template.