Function Calling Explained: How AI Models Use Tools (With Real Examples)
Function calling lets LLMs request specific tool actions rather than just generating text. Here's how it works, when to use it, and practical examples in Python.
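The core loop is the same across providers: the model returns a structured request naming a tool and its arguments, your code runs the tool, and the result goes back to the model. A minimal sketch of that dispatch step, with a hard-coded stand-in for the model's response (the message shape and the `get_weather` tool are illustrative, not any specific provider's schema):

```python
import json

# A tool the model may request; the name and signature are illustrative.
def get_weather(city: str) -> dict:
    # Stand-in for a real weather lookup.
    return {"city": city, "temp_c": 21}

TOOLS = {"get_weather": get_weather}

# In a real system this dict comes back from the model API;
# here it is hard-coded to show only the dispatch step.
model_message = {
    "tool_call": {
        "name": "get_weather",
        "arguments": json.dumps({"city": "Oslo"}),
    }
}

def dispatch(message: dict) -> dict:
    """Run the tool the model asked for and return its result."""
    call = message["tool_call"]
    fn = TOOLS[call["name"]]
    args = json.loads(call["arguments"])
    return fn(**args)

result = dispatch(model_message)  # result is sent back to the model as a tool message
```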

How RAG Works: The Plain-English Guide to Retrieval Augmented Generation
RAG is one of the most widely used techniques in production AI. Here's a clear, jargon-free explanation of how it works, why it matters, and when to use it.

How to Prompt Reasoning Models: o1, o3, and Claude Extended Thinking
Reasoning models like OpenAI o1/o3 and Claude with extended thinking work differently from standard models. Here's what changes, what doesn't, and how to get the best results.

What is Context Engineering? The Term Replacing 'Prompt Engineering' in 2025
Context engineering is the practice of designing everything that goes into an AI's context window — not just the prompt. Here's why it matters and how to get better at it.

Retrieval Augmented Generation (RAG): Ground Your AI in Real Data
RAG connects an LLM to an external knowledge base so it answers from facts rather than memory. Learn how RAG works, when to use it, and how to prompt effectively in RAG systems.

Self-Consistency: Get Better Answers by Sampling Multiple Reasoning Paths
Self-consistency generates multiple chain-of-thought responses and takes the majority vote. Learn how it dramatically improves accuracy on reasoning tasks and when to use it.
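The technique itself is just sample-then-vote. A minimal sketch, with a stub generator standing in for repeated model calls at temperature > 0 (the stub answers are made up for illustration):

```python
from collections import Counter

def self_consistency(sample_fn, n=5):
    """Sample n chain-of-thought answers and return the majority answer."""
    answers = [sample_fn() for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Stub: five reasoning paths, three of which converge on 42.
samples = iter([42, 41, 42, 42, 41])
final = self_consistency(lambda: next(samples))  # majority answer: 42
```

In practice `sample_fn` would call the model with the same chain-of-thought prompt each time and extract only the final answer for voting.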

Generate Knowledge Prompting: Let the Model Teach Itself Before Answering
Generate Knowledge Prompting has the LLM produce relevant facts and context before answering a question — dramatically improving accuracy by giving the model a better foundation to reason from.

Reflexion: Teach AI to Learn from Its Own Mistakes
Reflexion is a technique where an LLM evaluates its own output, identifies what went wrong, and generates an improved response — a powerful self-correction loop for complex tasks.

Chain of Thought Prompting: The Complete Guide
Learn how Chain of Thought (CoT) prompting forces AI models to reason step-by-step, dramatically improving results for math, logic, and complex reasoning tasks.

Few-Shot Prompting: Teaching AI by Example
Learn how to use few-shot prompting to dramatically improve AI output quality by showing the model exactly what you want through examples.

XML Tags & Delimiters: Structure Your Prompts Like a Pro
Learn how to use XML tags and delimiters to clearly separate instructions from data in your prompts — a technique that dramatically reduces errors on complex tasks.

Chain of Thought Prompting: Make AI Reason Step by Step
Chain of Thought (CoT) prompting forces AI to show its reasoning before answering — dramatically improving accuracy on logic, math, analysis, and multi-step tasks.

Avoiding Hallucinations: Keep AI Grounded in Facts
Learn what causes AI hallucinations and the specific prompting techniques that dramatically reduce fabricated facts, fake citations, and confidently wrong answers.

Constrained Generation: Force Structured Output
Learn how to make AI models reliably output JSON, XML, CSV, and other structured formats — essential for integrating AI into real applications and workflows.
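Even with a well-constrained prompt, production code should validate the model's output before using it. A common defensive pattern, sketched below — the fence-stripping heuristic is an assumption about typical model output, not any library's API:

```python
import json

def parse_json_output(text: str) -> dict:
    """Parse a JSON object from model output, tolerating markdown code fences."""
    cleaned = text.strip()
    if cleaned.startswith("```"):
        # Strip the surrounding backticks, then an optional language tag.
        cleaned = cleaned.strip("`")
        if cleaned.startswith("json"):
            cleaned = cleaned[len("json"):]
    return json.loads(cleaned)  # raises json.JSONDecodeError on bad output
```

On a `JSONDecodeError`, a typical workflow retries the request, often feeding the error message back to the model.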

System Prompts: Giving AI Standing Instructions
System prompts let you set persistent rules, persona, and context that apply to every message in a conversation. Learn how to write them effectively and when they change everything.

Prompting With Long Documents and Large Context
Pasting a 50-page document and asking 'what do you think?' rarely works. Learn how to structure prompts for long-form content, extract what matters, and work around context limits.

Multimodal Prompting: Images, Files, and Mixed Content
Modern AI models can see, read files, and process multiple input types at once. Learn how to structure prompts that work with images, documents, data files, and mixed content effectively.

System Prompts Explained: Give AI a Personality
Most people never touch system prompts. The ones who do get dramatically better results. Here's what they are, why they matter, and how to write one that actually works.

Prompting for Writers: How to Get AI to Match Your Voice
The biggest mistake writers make with AI is letting it sound like AI. Here's exactly how to train a model on your style and use it as a writing partner without losing what makes your work yours.

Zero-Shot vs Few-Shot Prompting: When to Use Which
Two of the most important prompting techniques — and most people don't even realize they're using them. Here's what they actually mean, when each one wins, and how to combine them.

How to Use AI for Coding (Even If You're Not a Developer)
AI has made coding accessible to people who never thought they'd write a line of code. But the gap between 'this doesn't work' and 'this works' is almost entirely in how you prompt. Here's what actually helps.

Prompting for Marketing: Copy That Doesn't Sound Fake
AI-generated marketing copy has a reputation for being generic and lifeless. That's a prompting problem. Here's how marketers can use AI to create sharper work — without losing what makes a brand distinctive.

Prompting for Data Analysis: Insights, Not Descriptions
Most people use AI to describe their data. Descriptions aren't insights. Here's how to prompt for analysis that actually helps you make decisions.

How I Replaced 4 Tools With One AI Workflow
A practical guide to building a personal AI workflow from scratch — covering system prompts, task routing, and the honest trade-offs of consolidating your tools.