
AI Agent Design Patterns: ReAct, Plan-and-Execute, and Reflexion Explained
Three of the most important agent architectures — ReAct, Plan-and-Execute, and Reflexion — each solve different problems. Learn when to use which and how they work in practice.
ReAct Prompting: Reasoning + Acting in a Loop
ReAct interleaves reasoning (Thought) and action (Act) steps so an AI agent can plan, use tools, and adjust its approach based on real-world feedback — all within a single prompt loop.
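The Thought → Action → Observation loop described above can be sketched in a few lines. This is a toy illustration, not a production agent: `fake_llm` scripts the model's replies and `calculator` is a hypothetical single tool standing in for real LLM and tool calls.

```python
def calculator(expression: str) -> str:
    """Toy tool: evaluate an arithmetic expression."""
    return str(eval(expression))

TOOLS = {"calculator": calculator}

def react(question: str) -> str:
    # Scripted replies stand in for real LLM API calls (assumption).
    replies = iter([
        "Thought: I need to compute 17 * 23.\nAction: calculator[17 * 23]",
        "Thought: The calculator returned 391.\nFinal: 391",
    ])
    transcript = f"Question: {question}"
    for reply in replies:
        transcript += "\n" + reply
        if "Final:" in reply:
            # The model has decided it is done.
            return reply.split("Final:", 1)[1].strip()
        # Parse "Action: tool[input]", run the tool, feed back the observation.
        name, arg = reply.split("Action:", 1)[1].strip().split("[", 1)
        transcript += f"\nObservation: {TOOLS[name.strip()](arg.rstrip(']'))}"
    return "no final answer"

print(react("What is 17 * 23?"))  # → 391
```

The key design point is the growing `transcript`: every thought, action, and observation is appended and re-sent, so each reasoning step sees the real-world result of the previous one.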
Automatic Prompt Engineer (APE): Let AI Optimize Your Prompts
Automatic Prompt Engineer uses an LLM to generate and evaluate candidate prompts, then selects the highest-performing version — turning prompt optimization into an automated search problem.
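The generate-score-select loop at the heart of APE can be sketched as below. Everything here is a toy stand-in (assumptions): the candidate list would normally be generated by an LLM, `fake_llm` replaces real model calls, and the eval set would be much larger.

```python
# Hypothetical candidate prompts; real APE asks an LLM to propose these.
CANDIDATES = [
    "Answer briefly:",
    "Think step by step, then answer:",
    "Answer in one word:",
]

# Tiny golden dataset of (question, expected answer) pairs.
EVAL_SET = [("2+2", "4"), ("capital of France", "Paris")]

def fake_llm(prompt: str, question: str) -> str:
    # Stand-in for a real model: only the step-by-step prompt "works".
    if prompt.startswith("Think step by step"):
        return {"2+2": "4", "capital of France": "Paris"}[question]
    return "unsure"

def score(prompt: str) -> float:
    """Fraction of eval questions the prompt answers correctly."""
    hits = sum(fake_llm(prompt, q) == a for q, a in EVAL_SET)
    return hits / len(EVAL_SET)

def ape() -> str:
    """Select the highest-scoring candidate prompt."""
    return max(CANDIDATES, key=score)

print(ape())
```

The search structure is the point: once prompt quality is a number, selection is just `max`, and fancier variants only change how candidates are proposed and scored.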
Program-Aided Language Models (PAL): Offload Computation to Code
PAL has an LLM write code to solve problems instead of computing answers directly — sidestepping arithmetic errors and enabling complex calculations that pure language models consistently get wrong.
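The PAL pattern is: ask the model for a program, then execute the program and trust its output rather than the model's. A minimal sketch, with `fake_model` as a hypothetical stand-in for a real LLM call that returns generated code:

```python
def fake_model(problem: str) -> str:
    # A real PAL prompt asks the LLM to emit a `solution()` function
    # for the given word problem; this reply is hard-coded (assumption).
    return (
        "def solution():\n"
        "    apples = 23\n"
        "    used = 20\n"
        "    bought = 6\n"
        "    return apples - used + bought\n"
    )

def pal(problem: str) -> int:
    code = fake_model(problem)
    namespace: dict = {}
    exec(code, namespace)           # run the generated program
    return namespace["solution"]()  # the answer comes from code, not the LLM

print(pal("The cafeteria had 23 apples, used 20, and bought 6 more. How many now?"))  # → 9
```

In production you would sandbox the `exec` call; running model-generated code with no isolation is the main risk of this technique.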
Context Engineering: The 2025 Evolution of Prompt Engineering
Context engineering is the art of designing what goes into an LLM's context window — beyond just the prompt. Learn how to structure memory, tools, retrieved data, and conversation history to build reliable AI systems.

OpenClaw Browser Relay: What It Is and How to Set It Up
OpenClaw's browser relay lets your AI agent control a real browser — taking screenshots, clicking elements, filling forms, and navigating pages. Here's how it works and when to use it.

OpenClaw Hooks Explained: Automate Actions on Any Event
OpenClaw's hooks system lets you trigger shell commands, scripts, or API calls on specific events — messages received, actions taken, or scheduled times. This guide covers every hook type with practical examples.

OpenClaw Multi-Agent Workflows: Parallel AI Task Execution
How to run multiple OpenClaw instances or use the orchestration layer to parallelize tasks, assign specialized agents, and build reliable multi-step AI workflows.
Agentic Prompting: Designing Prompts for AI Agents
AI agents don't just answer questions — they plan, use tools, and take multi-step actions. Learn how to design prompts that make autonomous AI systems reliable, safe, and effective.
Prompt Compression & Token Efficiency
Shorter prompts cost less, run faster, and often produce better results. Learn how to reduce token usage without sacrificing output quality — and how to measure when compression is hurting you.

Building Custom Skills and Plugins for OpenClaw
OpenClaw's skill system lets you add any capability your AI doesn't have by default. This guide covers building skills from scratch — REST API calls, database lookups, shell commands, and publishing to the community.
Prompt Chaining: Build Multi-Step AI Workflows
Learn how to break complex tasks into a sequence of focused prompts where each output feeds the next — unlocking tasks that a single prompt can't reliably handle.
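The "each output feeds the next" idea is the whole technique, and it fits in a few lines. A minimal sketch, with `fake_llm` as a hypothetical stand-in for a real chat-completion call:

```python
def fake_llm(prompt: str) -> str:
    # Pretend the model follows each instruction literally (assumption).
    if prompt.startswith("Summarize:"):
        return "LLMs can chain prompts."
    if prompt.startswith("Translate to German:"):
        return "LLMs können Prompts verketten."
    return ""

def chain(text: str) -> str:
    # Step 1: one focused prompt per task...
    summary = fake_llm(f"Summarize: {text}")
    # Step 2: ...whose output becomes the next prompt's input.
    translation = fake_llm(f"Translate to German: {summary}")
    return translation

print(chain("A long article about prompt chaining..."))
```

Because each step is a separate call, you can validate, log, or retry the intermediate output before passing it on — the main reliability win over a single mega-prompt.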
Prompt Evaluation: Test and Improve Prompts Scientifically
Move beyond 'this looks good' — learn how to build evaluation frameworks that measure prompt performance with real metrics, A/B testing, and golden datasets.
Tree of Thought: Multi-Path Reasoning for Complex Problems
Tree of Thought prompting extends Chain of Thought by exploring multiple reasoning paths simultaneously — dramatically improving performance on complex planning, creative, and decision-making tasks.
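Multi-path exploration usually reduces to beam search over partial "thoughts": expand several continuations, score them, keep the best few. A toy sketch where `expand` and `value` are hypothetical stand-ins for LLM calls that propose and rate continuations:

```python
def expand(thought: str) -> list[str]:
    # A real system asks the LLM for candidate next steps.
    return [thought + "a", thought + "b"]

def value(thought: str) -> int:
    # A real system asks the LLM to rate each partial solution;
    # here, "a" steps are arbitrarily considered better.
    return thought.count("a")

def tree_of_thought(root: str, depth: int = 3, beam: int = 2) -> str:
    frontier = [root]
    for _ in range(depth):
        # Expand every surviving thought, then prune to the best `beam`.
        candidates = [t for th in frontier for t in expand(th)]
        frontier = sorted(candidates, key=value, reverse=True)[:beam]
    return frontier[0]

print(tree_of_thought(""))  # → "aaa"
```

Chain of Thought is the degenerate case `beam = 1` with a single expansion; the pruning step is what keeps the tree from exploding exponentially.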
Meta-Prompting: Using AI to Write Better Prompts
One of the most powerful techniques at the advanced level is turning AI on itself — using it to generate, critique, and optimize your prompts. Here's how meta-prompting works and when to use it.
Adversarial Prompting and Red-Teaming Your AI Systems
If you're building anything with AI — a chatbot, a workflow, an automated system — you need to know how it fails under adversarial conditions. Here's how to think about it and what to do about it.
Fine-Tuning vs Prompting: When to Use Which
Prompt engineering and fine-tuning are both tools for getting AI to behave a specific way. Understanding when each makes sense — and the real trade-offs — helps you avoid expensive mistakes.

LangChain vs LangGraph: Which One Should You Use?
LangChain handles linear pipelines. LangGraph handles everything that needs loops, branching, or persistent state. Here's the exact decision framework — with code showing when each breaks and why.

Structured Output from AI APIs: JSON Every Time
How to reliably get JSON, typed objects, and formatted data from ChatGPT, Claude, and Gemini APIs. Covers response_format, tool use, Pydantic, and every technique that actually works in production.
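One technique that underlies several of the approaches mentioned is validate-and-retry: parse the reply, and on failure re-prompt with the error. A minimal sketch using the standard-library `json` module, with `fake_llm` and its canned replies as stand-ins for real API calls:

```python
import json

# Canned replies (assumption): first attempt is chatty and invalid,
# the retry is clean JSON.
REPLIES = [
    "Sure! Here is the JSON: {'name': 'Ada'}",
    '{"name": "Ada", "year": 1815}',
]

def fake_llm(prompt: str, attempt: int) -> str:
    return REPLIES[attempt]

def get_json(prompt: str, retries: int = 2) -> dict:
    for attempt in range(retries):
        reply = fake_llm(prompt, attempt)
        try:
            return json.loads(reply)
        except json.JSONDecodeError as err:
            # Feed the parse error back so the model can self-correct.
            prompt += f"\nThat was not valid JSON ({err}). Reply with JSON only."
    raise ValueError("model never produced valid JSON")

print(get_json("Return Ada Lovelace as JSON."))
```

Provider-native features like OpenAI's `response_format` or tool calling make the first attempt far more likely to parse, but a retry loop like this is the safety net most production systems keep anyway.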

LangGraph: Build Stateful AI Agents That Actually Work
LangGraph extends LangChain with graph-based agent architecture — nodes, edges, state, and cycles. Learn how to build reliable multi-step AI agents with real Python code examples.

LangChain Explained: Build LLM Apps Without Boilerplate
LangChain is the most widely used framework for building applications on top of LLMs. This guide covers chains, prompt templates, output parsers, and LCEL — with real Python code snippets throughout.