
Build Your First AI Agent: A Beginner's Step-by-Step Guide
Build a working AI agent from scratch — one that can use tools, make decisions, and complete multi-step tasks. No prior agent experience needed.

Function Calling Explained: How AI Models Use Tools (With Real Examples)
Function calling lets LLMs request specific tool actions rather than just generating text. Here's how it works, when to use it, and practical examples in Python.

What is the Model Context Protocol (MCP)? A Plain-English Guide
MCP is Anthropic's open standard for connecting AI assistants to external tools and data sources. Here's what it is, how it works, and why it matters for AI developers.

Reflexion: Teach AI to Learn from Its Own Mistakes
Reflexion is a technique where an LLM evaluates its own output, identifies what went wrong, and generates an improved response — a powerful self-correction loop for complex tasks.

ReAct Prompting: Reasoning + Acting in a Loop
ReAct interleaves reasoning (Thought) and action (Action) steps so an AI agent can plan, use tools, and adjust its approach based on real-world feedback — all within a single prompt loop.

Context Engineering: The 2025 Evolution of Prompt Engineering
Context engineering is the art of designing what goes into an LLM's context window — beyond just the prompt. Learn how to structure memory, tools, retrieved data, and conversation history to build reliable AI systems.

LangChain vs LangGraph: Which One Should You Use?
LangChain handles linear pipelines. LangGraph handles everything that needs loops, branching, or persistent state. Here's the exact decision framework — with code showing when each breaks and why.

LangGraph: Build Stateful AI Agents That Actually Work
LangGraph extends LangChain with graph-based agent architecture — nodes, edges, state, and cycles. Learn how to build reliable multi-step AI agents with real Python code examples.