AI agents are a different beast from chat completions. A standard prompt gets one response. An agent prompt gets used repeatedly — to plan, to call tools, to evaluate results, to decide what to do next. A flaw that causes a mild annoyance in a chat session can cause an infinite loop or incorrect actions in an agentic system.
This lesson covers the principles for designing prompts that work in agentic contexts.
What Makes Agentic Prompting Different
In a standard conversation, you write a prompt, get a response, and evaluate it yourself. Mistakes are immediately visible and easy to recover from.
In an agentic system:
- The model takes sequential actions based on previous results
- It calls external tools (search, code execution, APIs, file systems)
- Errors compound — a bad step early in a chain causes cascading failures
- The model operates with partial information that changes as it acts
The prompt doesn't just need to produce good output once. It needs to produce reliable, parseable output across many executions under varying conditions.
The Core Agent Prompt Structure
Most effective agent system prompts share a common structure:
You are [role/identity].
## Objective
[What the agent is trying to accomplish, stated precisely]
## Tools Available
[List of tools with clear descriptions of what each does and when to use it]
## Working Process
[How the agent should approach problems — think before acting, verify before concluding]
## Output Format
[Exactly what the agent should output at each step]
## Constraints
[What the agent must not do, when to stop and ask for help]
The most important section is Constraints. Agents without clear stopping conditions tend to continue past the point where they should pause, loop when they hit errors, or take actions beyond their intended scope.
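The structure above can also be assembled programmatically, which keeps sections consistent across agents and makes a missing section fail loudly. A minimal sketch, assuming a template-filling helper of our own invention (the template text and function name are illustrative, not any framework's API):

```python
# A minimal sketch of assembling the agent prompt structure above.
# All section content here is illustrative.

AGENT_PROMPT_TEMPLATE = """You are {role}.

## Objective
{objective}

## Tools Available
{tools}

## Working Process
{process}

## Output Format
{output_format}

## Constraints
{constraints}"""

def build_agent_prompt(role: str, objective: str, tools: str,
                       process: str, output_format: str, constraints: str) -> str:
    """Every section is a required argument, so a prompt missing its
    constraints or output format raises at build time, not at runtime."""
    return AGENT_PROMPT_TEMPLATE.format(
        role=role, objective=objective, tools=tools,
        process=process, output_format=output_format, constraints=constraints,
    )
```

Making every section a required argument is the point: the section most often skipped in practice, Constraints, cannot be silently omitted.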
Designing for Tool Use
When the agent has access to tools, the system prompt needs to specify:
What each tool does — not just its name but its behavior, limitations, and cost (tokens, API calls, time).
When to use each tool — agents given multiple tools need guidance on prioritization.
What to do when a tool fails — explicitly state retry limits and fallback behavior.
## Tools
search(query: string) → Returns top 5 search results with titles and snippets.
Use for: finding current information, verifying facts, exploring topics.
Limit: 3 calls per task. If results are insufficient after 3 searches, state what you found and what remains unclear.
read_file(path: string) → Returns file contents as a string.
Use for: inspecting files the user has provided.
Do not use for: files outside the provided workspace.
write_file(path: string, content: string) → Writes content to a file.
Use for: saving your final output.
Always confirm the path before writing. Never overwrite without checking.
Planning Before Acting
One of the most effective patterns for agentic prompts is requiring explicit planning before execution:
Before taking any action:
1. Restate the task in your own words to confirm understanding
2. List the steps you plan to take
3. Identify any information you are missing
4. Identify any risks or irreversible steps
Only begin executing after this planning phase.
This accomplishes two things:
- Forces the model to catch misunderstandings before they turn into wrong actions
- Makes the agent's reasoning visible and auditable
In practice, this planning step often surfaces gaps that would have caused failures mid-execution.
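The plan-first pattern can also be enforced mechanically on the harness side, so a tool call before a plan is rejected rather than trusted to the prompt alone. A hypothetical guard, not any framework's API:

```python
class PlanFirstExecutor:
    """Refuses tool calls until the agent has submitted a plan.
    A hypothetical harness-side guard for the plan-before-act pattern."""

    def __init__(self):
        self.plan: str | None = None

    def submit_plan(self, plan: str) -> None:
        if not plan.strip():
            raise ValueError("Plan must not be empty")
        self.plan = plan

    def call_tool(self, name: str, **kwargs) -> str:
        if self.plan is None:
            raise RuntimeError(
                "No plan on record: restate the task, list your steps, "
                "and identify missing information before acting")
        # ... dispatch to the real tool here ...
        return f"called {name}"
```

Rejecting the call with an instruction (rather than a bare error) gives the model a path back to compliant behavior on the next turn.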
Verification and Self-Checking
Agents benefit from explicit verification steps built into the prompt:
After completing each major step:
- Verify the output matches what you expected
- Check for errors or unexpected results before proceeding
- If something looks wrong, diagnose before continuing — do not assume it will resolve itself
For high-stakes operations (writing files, making API calls, modifying data), add explicit confirmation requirements:
Before any write operation, output a summary of:
- What you are about to write
- Where you are writing it
- Why this action is correct
Then proceed with the write.
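The confirmation requirement above can be wired into the harness as a gate: summarize the pending write, and only proceed if an approval callback accepts it. In this sketch the callback and the in-memory `store` are stand-ins (the approver might be a human reviewer or a second model pass, and `store` stands in for the real filesystem):

```python
def gated_write(path: str, content: str, reason: str, approve, store: dict) -> bool:
    """Summarize a pending write; only proceed if `approve` accepts it.
    `approve` is any callable taking the summary and returning a bool."""
    summary = (f"About to write {len(content)} characters to {path!r}.\n"
               f"Why this action is correct: {reason}")
    if not approve(summary):
        return False  # refused: surface the summary instead of writing
    if path in store:
        raise FileExistsError(f"{path!r} exists; refusing to overwrite")
    store[path] = content
    return True
```

Note the overwrite check is separate from approval: even an approved write never silently clobbers an existing file, matching the "never overwrite without checking" rule from the tool descriptions.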
Handling Uncertainty and Dead Ends
Agents that don't know how to handle uncertainty tend to hallucinate their way through it. Explicit guidance prevents this:
When you are uncertain:
- State what you know, what you don't know, and what you would need to know to proceed
- Do not guess or fabricate information to fill gaps
- Ask for clarification rather than proceeding on assumptions
When you reach a dead end:
- Stop and explain what you attempted and why it didn't work
- Suggest what additional information or tools would be needed
- Do not loop — if you have tried the same approach twice with the same failure, stop and report
The "do not loop" instruction is critical. Without it, models will often retry failed operations indefinitely, consuming resources without progress.
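The "tried the same approach twice" rule is easy to back up in code: track failures keyed by tool and input, and stop the run when an identical failure repeats. A minimal sketch with invented names:

```python
from collections import Counter

class LoopDetector:
    """Stops an agent that repeats the same failing action.
    Keyed on (tool, input) so a retry with *different* input is allowed."""

    def __init__(self, max_identical_failures: int = 2):
        self.max = max_identical_failures
        self.failures: Counter = Counter()

    def should_stop(self, tool: str, tool_input) -> bool:
        """Record one failure; True means stop and report, not retry."""
        key = (tool, repr(tool_input))
        self.failures[key] += 1
        return self.failures[key] >= self.max
```

Keying on the input matters: retrying a search with a reworded query is legitimate exploration, while retrying the identical query is the loop the prompt forbids.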
Scoping the Agent's Authority
One of the most important aspects of agentic prompts is explicitly defining what the agent is and isn't allowed to do:
You may:
- Read files in the /workspace directory
- Call the search tool to find information
- Write summaries and reports
You may not:
- Delete or modify existing files
- Make external API calls not listed in the tools section
- Access system information or environment variables
- Execute code
If asked to do something outside these boundaries, explain the limitation and suggest what a human could do instead.
This principle of least authority — give the agent exactly the permissions it needs and no more — reduces the blast radius of errors.
Structured Output from Agents
Agents that produce structured, predictable output are easier to build reliable systems around. Define the output format explicitly, and use XML tags or JSON so downstream code can validate and parse each step:
For each action, output exactly:
<action>
<thought>Your reasoning for this action</thought>
<tool>Tool name (or "none" if no tool needed)</tool>
<input>Tool input or response if no tool</input>
</action>
After the final action, output:
<result>
<summary>What you accomplished</summary>
<output>The deliverable (file path, answer, etc.)</output>
<issues>Any problems encountered or limitations of the result</issues>
</result>
Structured output makes it easy to parse agent behavior, log it, and debug it. Unstructured output from agents makes it hard to integrate their work into larger systems.
Common Agentic Prompting Mistakes
No explicit stopping conditions — The agent doesn't know when it's done and continues past the goal.
Vague tool descriptions — The model misuses tools or calls them in wrong situations.
No handling for tool failures — A single API error derails the entire task.
No authority boundaries — The agent takes broader actions than intended.
Missing self-check steps — Errors propagate through the chain undetected.
Asking for too much in one step — Complex multi-step tasks work better when decomposed into smaller, verifiable sub-tasks.
Key Takeaways
- Agentic prompts need explicit structure: role, objective, tools, process, output format, constraints
- Plan before acting — require explicit planning steps before execution begins
- Define tool use precisely: what each tool does, when to use it, what to do on failure
- Set clear authority boundaries — agents should know exactly what they're allowed to do
- Build in verification steps to catch errors before they compound
- Specify what to do when uncertain or stuck — agents should stop and report, not guess and continue
Next: techniques for measuring and improving the efficiency of your prompts. Prompt Compression & Token Efficiency →