Prompt engineering has matured enough that there's now too much to learn from and not enough time to sort the good from the filler. This is a curated list — things I'd actually recommend to someone starting out or levelling up in 2026, not an exhaustive directory of everything that exists.
Organised by type: structured learning, reference guides, communities, tools, and people worth following.
Structured learning
MasterPrompting.net (this site) Five tracks from beginner to advanced, covering every major technique from basic clarity through chain-of-thought, meta-prompting, agent design, and AI safety. The Agents track and Advanced track are particularly strong. Free.
Anthropic's Prompt Engineering docs Anthropic publishes detailed guides on how Claude specifically works — what structures it responds to, how to use XML tags effectively, system prompt patterns, and tool use. If you're working primarily with Claude, this is essential reading. Updated regularly as models improve.
OpenAI's Prompt Engineering guide Covers GPT-specific patterns — few-shot examples, structured outputs, JSON mode, function calling. More technical than Anthropic's docs and assumes some comfort with APIs. Useful even if you're not an OpenAI customer because the principles transfer.
DeepLearning.AI short courses Andrew Ng's platform has several practical prompt engineering courses that take 1–2 hours each. The ChatGPT Prompt Engineering for Developers course (free) is a good technical foundation. Some are co-produced with Anthropic and OpenAI, so the material reflects how those models actually behave rather than second-hand folklore.
Learn Prompting (learnprompting.org) Open-source, community-maintained, and one of the oldest resources in the space. Strong on red-teaming, adversarial prompting, and safety — areas most courses skip. Cited by Google, Microsoft, and O'Reilly. Worth bookmarking for the safety and security sections specifically.
Reference guides
Prompt Engineering Guide (promptingguide.ai) The most comprehensive free reference on techniques — chain-of-thought, tree-of-thought, ReAct, RAG, self-consistency, and more. Dry reading but thorough. Use it as a reference rather than a course. The research paper citations are useful if you want to go deeper on any technique.
Anthropic's Model Card and system prompt documentation Less famous than their blog posts but more useful in practice. The detailed guidelines on what Claude will and won't do, and how different inputs affect its behaviour, are directly applicable when debugging prompts that aren't working.
Google's Prompting Essentials Google's official resource for Gemini prompting. Well structured and practical, especially for multimodal use cases (image + text prompts, which Gemini handles natively).
Communities
r/PromptEngineering The largest prompt engineering community on Reddit. Quality varies, but the top posts are often genuinely useful — people sharing techniques that work, debugging sessions, and comparisons across models. Good for finding prompts you wouldn't have thought to try yourself.
Latent Space (Discord + podcast) Primarily aimed at AI engineers and researchers but the signal quality is high. If you're building with AI rather than just using it, this community is worth being part of.
Twitter/X — AI practitioner accounts A lot of the most current prompting knowledge lives on X, shared by people actively building with models. Follow practitioners over commentators — people who are running experiments and sharing results, not just discussing news.
Tools
The MasterPrompting Prompt Library 62+ ready-to-use prompts across writing, coding, research, data, marketing, and more. Each has a copy button, difficulty rating, and tips. Good for getting unstuck or finding a starting point.
PromptBase Marketplace for buying and selling prompts. More useful for image prompts (Midjourney, DALL-E) than text prompts, but browsing it gives you a sense of how people structure prompts for specific tasks.
LangSmith / Braintrust / Promptfoo Prompt evaluation and testing tools. If you're engineering prompts systematically — running evals, A/B testing, tracking regressions — these are the tools people are using in 2026. Not for beginners, but worth knowing about once you're past the basics.
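The core idea behind all of these tools is the same: run each prompt variant against a fixed set of test cases and score the outputs. Here's a minimal sketch of that loop in plain Python — the function names and the stub model are illustrative, not the API of any of the tools above:

```python
# Minimal sketch of what prompt-eval tools automate: run each prompt
# variant against a set of test cases and score the outputs.
# `model` stands in for a real API call (hypothetical).

def run_eval(model, prompt_variants, cases):
    """Score each prompt variant by the fraction of cases it passes."""
    scores = {}
    for name, template in prompt_variants.items():
        passed = 0
        for case in cases:
            output = model(template.format(**case["vars"]))
            if case["check"](output):
                passed += 1
        scores[name] = passed / len(cases)
    return scores

# Stub model for illustration: echoes the prompt back unchanged.
def echo_model(prompt):
    return prompt

variants = {
    "terse": "Summarise: {text}",
    "structured": "Summarise the text below in one sentence.\n\nText: {text}",
}
cases = [{"vars": {"text": "LLMs predict tokens."},
          "check": lambda out: "LLMs" in out}]

scores = run_eval(echo_model, variants, cases)
```

The real tools add versioning, model routing, and dashboards on top, but if you understand this loop you understand what you're buying.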
Cursor / Claude Code / Windsurf If code is part of your work, these AI coding tools apply prompt engineering in a specific context. Writing a good CLAUDE.md for your project is itself a prompt engineering exercise.
Papers worth reading (non-academic summary)
You don't need to read AI research papers to be a good prompt engineer. But these have had outsized influence on the techniques everyone uses:
"Chain-of-Thought Prompting Elicits Reasoning in Large Language Models" (Wei et al., 2022) The paper that established that asking the model to reason step-by-step dramatically improves accuracy. Everything in the chain-of-thought space traces back here.
"ReAct: Synergizing Reasoning and Acting in Language Models" (Yao et al., 2022) The foundation of how AI agents work — the reason-act-observe loop. If you're building or using agents, understanding ReAct is useful. The ReAct lesson covers the practical application.
"Large Language Models are Zero-Shot Reasoners" (Kojima et al., 2022) Showed that simply adding "Let's think step by step" to a prompt significantly improves performance on reasoning tasks. Counterintuitively simple and widely applicable.
"Self-Consistency Improves Chain of Thought Reasoning" (Wang et al., 2022) Showed that generating multiple reasoning paths and taking the majority answer outperforms single-pass chain-of-thought. The basis for self-consistency prompting.
People worth following
Rather than listing specific handles that may change, here's who to look for:
- Model researchers at Anthropic, OpenAI, Google DeepMind — they publish findings and practical guidance directly
- Practitioners building production AI systems — not people writing about AI, people who are deploying it and writing about what breaks
- Red teamers and safety researchers — they find the edges and failure modes that improve everyone's prompting practice
- Developers working with specific frameworks — LangGraph, n8n, CrewAI — their real-world experience translates to better prompting knowledge
The signal-to-noise ratio in the AI content space is low. Prioritise people who share experiments and results over people who share opinions and predictions.
What's worth skipping
- $1000+ prompt engineering courses — The knowledge in them is available free and more up to date. The market moved faster than the curricula.
- "Jailbreak" focused content — Interesting academically, mostly irrelevant for building useful things.
- Prompt collections with no context — Lists of prompts without explanation of why they work don't teach you anything transferable.
- Most "AI secrets" content — If it promises hidden tricks, it's usually repackaging basics.
The best investment in 2026 isn't finding the perfect resource — it's building a practice of deliberate experimentation. Use a model, notice what works, understand why, apply that understanding to the next thing. The prompt library and the structured tracks here give you a foundation to experiment from.



