Here's something most people using ChatGPT or Claude don't know: every conversation already has a system prompt. You just didn't write it.
OpenAI, Anthropic, Google — they all configure their models with instructions before you say a single word. Those instructions shape how the model behaves, what it refuses, how it formats responses, and what kind of "personality" it has. The model you interact with isn't raw GPT-4. It's GPT-4 with a system prompt telling it how to act.
Once I understood that, everything clicked. You can do the same thing.
What Is a System Prompt, Exactly?
A system prompt is a set of instructions that gets injected before any conversation starts. The model reads it first, then reads your message, then responds.
Think of it like briefing a contractor before they start work. You don't want to explain your standards, preferences, and requirements every single time you have a meeting. You explain them once, upfront, and they carry that context into everything they do.
That's the system prompt.
In tools like ChatGPT's Custom Instructions, Claude Projects, or the API, you can write your own. And honestly, once you start using them, you'll wonder how you ever lived without them.
Why Bother?
Because without a system prompt, you're starting from zero every time.
Every new conversation, you have to re-explain:
- Who you are
- What tone you want
- What format works for you
- What the AI should and shouldn't assume
It's exhausting. And most people just... don't do it. So their prompts are bloated with context that should have been set once, or they get generic responses because the model has no idea who it's talking to.
A good system prompt fixes all of that. It's a setup cost you pay once that pays you back every single session.
The Anatomy of a Good System Prompt
Not all system prompts are equal. I've seen some that are three words. I've seen some that are 3,000 words and try to specify every possible scenario. Both extremes are usually wrong.
Here's the structure I've landed on after a lot of trial and error:
1. Identity: Who is this AI right now?
Don't just say "you're a helpful assistant." That's the default — you're not adding anything.
Instead, give it a specific role relevant to your actual use case:
You are a senior product manager at a B2B SaaS company. You help me think through product decisions, write PRDs, and stress-test ideas.
Notice what this does: it sets expertise level, domain, and purpose. The model now has context that shapes every single response.
2. Audience and purpose
Who are you, and what are you trying to do?
I'm a solo founder building a project management tool for design agencies. Most of my questions will be about product strategy, positioning, or writing tasks.
This lets the model tailor advice to your actual situation rather than giving you generic best-practices content.
3. Tone and style rules
This is where most people don't go deep enough. "Be concise" and "be professional" are too vague to be useful.
Be specific about what you actually want:
Tone: Direct and conversational. I don't need you to preface answers with "Great question!" or "Certainly!" Just answer. Use plain language. Avoid jargon unless it's industry-standard and necessary.
Format: Default to short paragraphs. Use bullet points only when genuinely listing things — not just to make answers look structured. If I ask for a document, format it properly. Otherwise, talk to me like a person.
4. Hard rules
Things the model should always or never do:
Always:
- Push back if you think I'm wrong. Don't just agree with me.
- If I ask for an opinion, give me one with reasoning, not "it depends."
- Flag assumptions you're making.
Never:
- Give me a canned disclaimer at the end of every response.
- Pad answers with filler. If the answer is short, be short.
- Use the phrase "I'd be happy to help with that."
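If you manage prompts in code rather than a settings page, the four pieces above can be assembled programmatically. Here's a minimal sketch in Python — the section contents and the helper's name are illustrative, not a required schema:

```python
# Assemble a system prompt from the four anatomy pieces:
# identity, audience/purpose, tone and style, and hard rules.

def build_system_prompt(identity: str, audience: str, style: str, rules: str) -> str:
    """Join the four sections into one system prompt string."""
    sections = [identity, audience, style, rules]
    # Skip empty sections so any piece can be left out.
    return "\n\n".join(s.strip() for s in sections if s.strip())

prompt = build_system_prompt(
    identity="You are a senior product manager at a B2B SaaS company.",
    audience="I'm a solo founder building a project management tool for design agencies.",
    style="Tone: direct and conversational. Plain language, no filler.",
    rules="Always flag assumptions you're making. Never pad answers.",
)
```

Keeping the sections as separate strings makes it easy to revise one piece (say, the tone rules) without touching the rest — which matters once you're maintaining several personas.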
A Real Example: My Writing Assistant System Prompt
Here's an actual system prompt I use for a writing assistant persona:
You are a sharp editorial assistant for a tech-focused content creator. Your job is to help me write, edit, and improve written content.
About my content:
- I write for a technical but non-developer audience (founders, PMs, operators)
- My style is conversational but substantive — no fluff, no filler
- I value specificity over vague generalities
- I use short sentences and paragraphs intentionally
- My tone is direct, sometimes a little wry, never corporate
Your role:
- When editing: preserve my voice. Don't make it sound "correct" if it means losing personality.
- When writing: match my style. If you're unsure, ask for a sample.
- When I share a rough draft: tell me what's working first, then what's not.
- If my idea is weak, say so and explain why.
Format defaults:
- Short responses for quick questions
- Full documents when I ask for them, properly formatted in Markdown
- Headers and bullets only when the content genuinely calls for it
Do not start responses with affirmations or filler phrases.
This took me maybe 20 minutes to write. It saves me 5 minutes per session minimum, and the quality of output is consistently better than what I'd get with zero context.
Where to Actually Put System Prompts
ChatGPT:
Go to Settings → Personalization → Custom Instructions. There are two fields — "What would you like ChatGPT to know about you?" and "How would you like ChatGPT to respond?" Both feed into the system prompt.
ChatGPT also has GPTs — custom versions with full system prompts you can configure.
Claude:
In claude.ai, you can create Projects, which have a "Project instructions" section. This is your system prompt. Everything in that project uses it.
Through the API, you pass it directly as the system parameter.
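Concretely, the request looks like this — a sketch of the Messages API shape, with the model id and prompt text as illustrative placeholders (the actual network call is shown in comments, not executed):

```python
# Sketch of passing a system prompt via the Anthropic Messages API.
# Note that `system` is a top-level parameter, not a message in the list.

SYSTEM_PROMPT = "You are a sharp editorial assistant for a tech-focused content creator."

request = {
    "model": "claude-3-5-sonnet-20241022",  # illustrative model id
    "max_tokens": 1024,
    "system": SYSTEM_PROMPT,                # the system prompt goes here
    "messages": [
        {"role": "user", "content": "Tighten this paragraph for me: ..."}
    ],
}

# With the anthropic SDK installed and ANTHROPIC_API_KEY set, you would send it:
#   import anthropic
#   client = anthropic.Anthropic()
#   response = client.messages.create(**request)
```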
Gemini:
Gems (in Gemini Advanced) let you customize behavior similarly. You define the persona, instructions, and knowledge sources.
API access (any provider):
If you're using the API directly, you supply the system prompt yourself: some APIs take it as a dedicated system parameter, others as a system-role message at the start of the conversation. This is the most powerful and flexible way — no UI limitations on length or complexity.
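In OpenAI-style chat APIs, for example, the system prompt is the first entry in the messages array. A sketch, with the prompt text illustrative and the actual call shown in comments rather than executed:

```python
# OpenAI-style shape: the system prompt is a message with role "system",
# placed before any user messages.

SYSTEM_PROMPT = (
    "You are a senior product manager at a B2B SaaS company. "
    "Push back when I'm wrong, and flag assumptions you're making."
)

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "Stress-test this feature idea: ..."},
]

# With the openai SDK installed and OPENAI_API_KEY set, you would send it:
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(model="gpt-4o", messages=messages)
```

Because the system prompt travels with every request, changing it is just editing a string — which is why the API route has none of the length or field limits the chat UIs impose.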
Common Mistakes
Trying to cover every scenario: Your system prompt isn't a legal contract. Don't try to anticipate every situation. Cover the 80% of cases you actually care about, and handle edge cases in the conversation.
Being too vague: "Be helpful and concise" is not a system prompt. It's a wish. Give specific examples when possible.
Setting a role the model won't actually play: Asking the model to "have no restrictions" or "pretend to be a different AI" doesn't work the way people think. Use system prompts for legitimate customization, not jailbreaks.
Not updating it: Your needs change. Review and revise your system prompt periodically. What worked three months ago might not fit how you're using AI today.
One More Thing
The best system prompts feel invisible. When they're working right, you don't think about them — you just notice that the AI gets you, it formats things the way you like, and you're not constantly correcting it.
That's the goal. Not a clever prompt that you show off to people. A quiet, effective one that makes every session better.
Start simple. Write one. Use it for a week. Improve it. You'll notice the difference.
If you want to go deeper on how to structure instructions for maximum reliability, the Intermediate Track covers XML tags, constraints, and delimiters that make complex system prompts much more predictable.
