A structured curriculum built from official OpenAI, Anthropic & Gemini documentation. 69 lessons across 6 tracks — beginner to AI agents, safety, and Claude Code.
Start here if you're new to AI or prompting. No prior experience needed.
What is a Prompt? Your First Step into AI
Understand what a prompt is, how AI models process them, and why the words you choose matter more than you think.
Clarity & Specificity: The #1 Prompting Skill
Learn why clarity is the single most impactful skill in prompt engineering and how to be specific in ways that dramatically improve your AI outputs.
Assigning Roles & Personas to AI Models
Learn how to use role assignment to prime AI models with domain expertise and improve the relevance, tone, and accuracy of their outputs.
Formatting Output: Control How AI Responds
Learn how to explicitly control the structure, length, and format of AI responses — so you get exactly what you need, every time.
How LLMs Work: What Every Prompter Should Know
A practical, non-technical explanation of how large language models work — and why this understanding makes you a dramatically better prompt engineer.
Giving AI the Context It Needs
AI doesn't know who you are, what you do, or what you're trying to accomplish. Learn what context to provide — and how to provide it — so you stop getting generic answers.
How to Iterate and Refine Your Prompts
One prompt rarely gets you where you want to go. The best results come from treating prompting as a conversation — refining, redirecting, and building on each response.
10 Common Prompting Mistakes (And the Fixes)
These are the patterns that produce bad AI output most of the time. Learn to spot them in your own prompts, fix them, and stop making them by default.
LLM Settings: Temperature, Top-P, Max Tokens, and More
Understanding temperature, top-p, max tokens, and stop sequences lets you control exactly how an AI model responds. Here's what each setting does and when to change it.
Prompt Elements: The Four Building Blocks of Any Good Prompt
Every effective prompt is built from four elements: instruction, context, input data, and output format. Learn what each does and how to combine them for consistently better results.
Learn the techniques used by professionals — few-shot, CoT, XML structure.
Few-Shot Prompting: Teaching AI by Example
Learn how to use few-shot prompting to dramatically improve AI output quality by showing the model exactly what you want through examples.
XML Tags & Delimiters: Structure Your Prompts Like a Pro
Learn how to use XML tags and delimiters to clearly separate instructions from data in your prompts — a technique that dramatically reduces errors on complex tasks.
Chain of Thought Prompting: Make AI Reason Step by Step
Chain of Thought (CoT) prompting forces AI to show its reasoning before answering — dramatically improving accuracy on logic, math, analysis, and multi-step tasks.
Avoiding Hallucinations: Keep AI Grounded in Facts
Learn what causes AI hallucinations and the specific prompting techniques that dramatically reduce fabricated facts, fake citations, and confidently wrong answers.
Constrained Generation: Force Structured Output
Learn how to make AI models reliably output JSON, XML, CSV, and other structured formats — essential for integrating AI into real applications and workflows.
System Prompts: Giving AI Standing Instructions
System prompts let you set persistent rules, persona, and context that apply to every message in a conversation. Learn how to write them effectively and when they change everything.
Prompting With Long Documents and Large Context
Pasting a 50-page document and asking 'what do you think?' rarely works. Learn how to structure prompts for long-form content, extract what matters, and work around context limits.
Multimodal Prompting: Images, Files, and Mixed Content
Modern AI models can see, read files, and process multiple input types at once. Learn how to structure prompts that work with images, documents, data files, and mixed content effectively.
Retrieval Augmented Generation (RAG): Ground Your AI in Real Data
RAG connects an LLM to an external knowledge base so it answers from facts rather than memory. Learn how RAG works, when to use it, and how to prompt effectively in RAG systems.
Self-Consistency: Get Better Answers by Sampling Multiple Reasoning Paths
Self-consistency generates multiple chain-of-thought responses and takes the majority vote. Learn how it dramatically improves accuracy on reasoning tasks and when to use it.
Generate Knowledge Prompting: Let the Model Teach Itself Before Answering
Generate Knowledge Prompting has the LLM produce relevant facts and context before answering a question — dramatically improving accuracy by giving the model a better foundation to reason from.
Reflexion: Teach AI to Learn from Its Own Mistakes
Reflexion is a technique where an LLM evaluates its own output, identifies what went wrong, and generates an improved response — a powerful self-correction loop for complex tasks.
Prompt Testing and Evaluation
Learn how to test prompts systematically — building golden sets, running regression tests, and measuring prompt quality before deploying to production.
Structured prompting with JSON schemas
Getting LLMs to output valid, structured JSON reliably — using JSON Schema as a contract, constrained generation modes, and error-handling patterns.
Working with Vision Models
Learn how to prompt multimodal AI models effectively — analyzing images, charts, screenshots, documents, and diagrams with Claude, GPT-4o, and Gemini.
Prompt chaining, evaluation frameworks, tree of thought, and expert patterns.
Prompt Chaining: Build Multi-Step AI Workflows
Learn how to break complex tasks into a sequence of focused prompts where each output feeds the next — unlocking tasks that a single prompt can't reliably handle.
Prompt Evaluation: Test and Improve Prompts Scientifically
Move beyond 'this looks good' — learn how to build evaluation frameworks that measure prompt performance with real metrics, A/B testing, and golden datasets.
Tree of Thought: Multi-Path Reasoning for Complex Problems
Tree of Thought prompting extends Chain of Thought by exploring multiple reasoning paths simultaneously — dramatically improving performance on complex planning, creative, and decision-making tasks.
Meta-Prompting: Using AI to Write Better Prompts
One of the most powerful techniques at the advanced level is turning AI on itself — using it to generate, critique, and optimize your prompts. Here's how meta-prompting works and when to use it.
Adversarial Prompting and Red-Teaming Your AI Systems
If you're building anything with AI — a chatbot, a workflow, an automated system — you need to know how it fails under adversarial conditions. Here's how to think about it and what to do about it.
Fine-Tuning vs Prompting: When to Use Which
Prompt engineering and fine-tuning are both tools for getting AI to behave a specific way. Understanding when each makes sense — and the real trade-offs — helps you avoid expensive mistakes.
Agentic Prompting: Designing Prompts for AI Agents
AI agents don't just answer questions — they plan, use tools, and take multi-step actions. Learn how to design prompts that make autonomous AI systems reliable, safe, and effective.
Prompt Compression & Token Efficiency
Shorter prompts cost less, run faster, and often produce better results. Learn how to reduce token usage without sacrificing output quality — and how to measure when compression is hurting you.
ReAct Prompting: Reasoning + Acting in a Loop
ReAct interleaves reasoning (Thought) and action (Act) steps so an AI agent can plan, use tools, and adjust its approach based on real-world feedback — all within a single prompt loop.
Automatic Prompt Engineer (APE): Let AI Optimize Your Prompts
Automatic Prompt Engineer uses an LLM to generate and evaluate candidate prompts, then selects the highest-performing version — turning prompt optimization into an automated search problem.
Program-Aided Language Models (PAL): Offload Computation to Code
PAL has an LLM write code to solve problems instead of computing answers directly — eliminating arithmetic errors and enabling complex calculations that pure language models consistently get wrong.
Context Engineering: The 2025 Evolution of Prompt Engineering
Context engineering is the art of designing what goes into an LLM's context window — beyond just the prompt. Learn how to structure memory, tools, retrieved data, and conversation history to build reliable AI systems.
Prompt Versioning and Management at Scale
Prompts are code. Learn how to version, test, and deploy prompt changes with the same rigor you'd apply to a software release — registries, A/B testing, and regression testing.
Build autonomous AI systems with tools, memory, ReAct loops, and multi-agent patterns.
What is an AI Agent?
Understand what separates an AI agent from a regular prompt. Learn how agents perceive, reason, act, and loop — and why this architecture unlocks a completely new class of AI applications.
Agent Components: Memory, Tools, Planning, and Perception
Break down the anatomy of an AI agent. Every agent — no matter how complex — is built from four components: memory, tools, a planning mechanism, and perception. Learn what each does and how they interact.
Function Calling: Giving LLMs Tools
Function calling is the technical mechanism that lets an LLM invoke external tools. Learn how to define tools, how models decide when to call them, and how to structure results so agents act reliably.
ReAct Prompting: Reason Before You Act
ReAct is the reasoning pattern that makes agents dramatically more reliable. By explicitly writing out thoughts before every action, the model plans better, catches errors earlier, and produces work you can follow and debug.
AI Workflows vs. AI Agents: Choosing the Right Architecture
Not every AI task needs an agent. Learn the difference between deterministic workflows and autonomous agents, when to use each, and how to avoid over-engineering with agents when a simpler pipeline would be more reliable.
Context Engineering for Agents
Context engineering is the discipline of deciding what information goes into an agent's context window, in what form, and when. It's the highest-leverage skill for building reliable agents at scale.
Multi-Agent Systems: Coordinating Multiple AI Agents
Single agents hit limits on complex tasks. Multi-agent systems split work across specialized agents, run tasks in parallel, and use orchestrators to coordinate. Learn the key patterns and when to use them.
Evaluating AI Agents: How to Know If Your Agent Works
Building an agent is only half the job. Learn how to measure agent performance, design test cases, catch failure modes before they reach production, and build evaluation systems that scale.
Agent Memory & State Management
Learn how AI agents store, retrieve, and manage information across sessions — from simple conversation history to persistent vector memory.
Agent Observability & Debugging
Why agents are hard to debug and how to fix it — tracing execution, identifying the 5 most common failure modes, and building the monitoring you need before you launch.
Tool Design for AI Agents
Tools are how agents interact with the world. Learn the principles of designing tools that agents use correctly — with schema best practices, taxonomy, and common mistakes.
Building Production-Ready Agents
The gap between a demo agent and a production agent is bigger than it looks. Learn the reliability patterns, security practices, and testing strategies that make agents safe to ship.
Prompt injection, jailbreaking, hallucinations, biases, and red-teaming. Build responsibly.
Prompt Injection: The Most Common AI Security Attack
Prompt injection tricks an AI into ignoring its instructions and following malicious commands embedded in user input or external data. Learn how it works and how to defend against it.
Prompt Leaking: Protecting Your System Prompts
Prompt leaking is when an AI is tricked into revealing its confidential system prompt. Learn why system prompts are hard to fully protect, what you can do, and what you should never put in one.
Jailbreaking: Techniques, Examples, and Defenses
Jailbreaking bypasses an AI's built-in safety guidelines through creative prompting. Learn the main jailbreak techniques, why they work, and how to make your AI systems more resistant to them.
Hallucinations Deep Dive: Why AI Confidently Gets Things Wrong
LLMs hallucinate — generating plausible-sounding but false information. Learn why hallucinations happen, which types of content are highest-risk, and practical techniques to minimize them.
Biases in LLM Outputs: What They Are and How to Reduce Them
LLMs inherit biases from training data, reinforcement feedback, and their own architecture. Learn the main bias types, how they surface in practice, and prompt strategies to reduce their impact.
Red-Teaming Your Prompts: Stress Test Before You Ship
Red-teaming is the practice of systematically attacking your own AI system to find vulnerabilities before real users do. Learn a practical red-teaming methodology for LLM applications.
AI Bias Mitigation Prompts
Practical techniques to identify, test for, and reduce bias in LLM outputs — with prompt patterns that produce fairer, more consistent results.
Responsible AI Agent Design
How to design AI agents that fail safely — with principles and patterns for scope limitation, human oversight, graceful degradation, and audit trails.
Master Claude Code CLI — CLAUDE.md context, custom slash commands, hooks, MCP servers, GitHub integration, and multi-agent workflows.
What is Claude Code?
Understand what Claude Code is, how it differs from Claude.ai, and what you can build with it.
Project Memory: CLAUDE.md & Context Files
Learn how to give Claude Code persistent memory about your project using CLAUDE.md and context files.
Custom Slash Commands
Create reusable slash commands to automate your most common workflows in Claude Code.
Skills & Reusable Workflows
Build multi-step skills that orchestrate complex sequences of actions across your codebase.
Hooks: Automating Actions Around Tool Use
Use hooks to automatically run scripts before or after Claude Code's tool calls — formatting, logging, notifications, and more.
MCP Servers: Extending Claude Code
Connect Claude Code to external services, databases, and APIs using Model Context Protocol servers.
GitHub Integration
Supercharge your GitHub workflow — PR reviews, issue management, and CI/CD integration using Claude Code with the GitHub MCP server.
Multi-Agent & Parallel Tasks
Spawn subagents and run tasks in parallel to dramatically speed up complex work across large codebases.
Settings, Permissions & Security
Understand Claude Code's permission system and configure it safely for personal projects, teams, and CI/CD pipelines.
Real-World Workflows: Putting It All Together
See how CLAUDE.md, hooks, MCP servers, and slash commands combine into a complete development workflow — from spec to merged PR.
Debugging with Claude Code
A systematic workflow for using Claude Code to find, diagnose, and fix bugs — from error messages to root cause analysis to verified fixes.