Most OpenClaw users run a single instance for personal use. But for complex, high-volume, or specialised workflows, running multiple agents in parallel or in a coordinated pipeline changes what's possible.
Two Approaches to Multi-Agent Workflows
Approach 1: Parallel Subtask Execution (Single Instance)
OpenClaw can decompose a complex task into subtasks and execute them in parallel within a single instance, using different LLM calls:
# ~/.openclaw/config/config.yml
orchestration:
  parallel_subtasks: true
  max_parallel: 3       # Max simultaneous LLM calls
  subtask_timeout: 60   # Seconds per subtask
When you send a complex request:
Research these five competitors and give me a comparison:
[Company A], [Company B], [Company C], [Company D], [Company E]
With parallel execution enabled, OpenClaw splits this into 5 research subtasks and runs them concurrently instead of sequentially. With `max_parallel: 3`, the five subtasks complete in two waves, so completion time drops from ~5x to ~2x the per-company research time (raise `max_parallel` to 5 to get close to ~1x).
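Under the hood, this fan-out is a bounded-concurrency pattern. Here is a minimal Python sketch of how `max_parallel` and `subtask_timeout` might be applied; the function names (`fan_out`, `run_subtask`, `fake_llm`) are hypothetical illustrations, not OpenClaw APIs:

```python
import asyncio

MAX_PARALLEL = 3      # mirrors max_parallel in config.yml
SUBTASK_TIMEOUT = 60  # mirrors subtask_timeout (seconds)

async def run_subtask(name, sem, worker):
    """Run one subtask under the shared concurrency limit and timeout."""
    async with sem:
        try:
            return await asyncio.wait_for(worker(name), timeout=SUBTASK_TIMEOUT)
        except asyncio.TimeoutError:
            return f"{name}: timed out"

async def fan_out(subtasks, worker):
    """Execute subtasks concurrently, at most MAX_PARALLEL at a time."""
    sem = asyncio.Semaphore(MAX_PARALLEL)
    return await asyncio.gather(*(run_subtask(t, sem, worker) for t in subtasks))

# Stand-in for a real LLM call: just echoes the subtask.
async def fake_llm(topic):
    await asyncio.sleep(0.01)
    return f"findings for {topic}"

companies = ["Company A", "Company B", "Company C", "Company D", "Company E"]
results = asyncio.run(fan_out(companies, fake_llm))
```

`asyncio.gather` preserves input order, so the five results come back in the same order the companies were listed, even though the middle of the run is unordered.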
Approach 2: Multiple Instances (Specialist Agents)
Run separate OpenClaw instances with different roles, each with its own SOUL.md and memory:
~/.openclaw-work/ # Work-focused agent
~/.openclaw-personal/ # Personal assistant
~/.openclaw-research/ # Research specialist
Each instance has different:
- LLM model (e.g. Claude Sonnet for research, GPT-4o-mini for quick personal tasks)
- SOUL.md personality and rules
- Integrations (work agent has GitHub and Slack; personal has Spotify and WhatsApp)
- Memory store (separated by domain)
Setting Up Specialist Instances
Create a second config directory:
mkdir -p ~/.openclaw-research/config
cp ~/.openclaw/config/config.yml ~/.openclaw-research/config/
cp ~/.openclaw/config/providers.yml ~/.openclaw-research/config/
Create a research-specific SOUL.md:
nano ~/.openclaw-research/config/soul.md
# Research Agent
## Role
You are a specialist research assistant. Your only job is to research topics thoroughly,
synthesise sources, and return structured findings. You do not manage tasks, send emails,
or take actions outside research.
## Behaviour
- Always cite sources or note when a claim is uncertain
- Prefer breadth then depth: overview first, detail on request
- Flag contradictions between sources explicitly
- Never recommend without evidence
## Output format
- Lead with a 2-sentence summary
- Follow with structured findings
- End with "open questions" — what you couldn't confirm
Start the research instance:
OPENCLAW_CONFIG=~/.openclaw-research openclaw start --port 3001
Orchestrator Pattern
Configure your primary instance as an orchestrator that delegates to specialist instances:
# ~/.openclaw/config/agents.yml
agents:
  research:
    url: "http://localhost:3001"
    auth_token: "research-agent-token"
    capabilities: ["research", "analysis", "summarise"]
  coding:
    url: "http://localhost:3002"
    auth_token: "coding-agent-token"
    capabilities: ["code", "debug", "review", "github"]
The orchestrator automatically routes requests to the right agent:
# This message goes to the research agent
Research the current state of vector databases for my AI infrastructure review.
# This message goes to the coding agent
Review the pull request at github.com/myorg/myrepo/pull/42
The primary agent decomposes mixed requests:
# This triggers both agents in parallel
Research best practices for API rate limiting AND
draft a code review comment for the rate limiter in PR #42.
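The document doesn't specify how routing works internally. As a rough illustration only, assuming the orchestrator scores each agent by how many of its declared capabilities appear in the message (the `AGENTS` dict mirrors `agents.yml`; `route` is a hypothetical name):

```python
# Agents as loaded from agents.yml (auth tokens omitted).
AGENTS = {
    "research": {"url": "http://localhost:3001",
                 "capabilities": ["research", "analysis", "summarise"]},
    "coding":   {"url": "http://localhost:3002",
                 "capabilities": ["code", "debug", "review", "github"]},
}

def route(message):
    """Pick the agent whose capabilities best match words in the message."""
    words = message.lower().split()  # crude tokenisation for the sketch
    best, best_score = "primary", 0  # fall back to the primary agent
    for name, spec in AGENTS.items():
        score = sum(1 for cap in spec["capabilities"]
                    if any(cap in w for w in words))
        if score > best_score:
            best, best_score = name, score
    return best
```

A real router would likely use the LLM itself to classify intent rather than keyword matching, but the shape is the same: match the request against declared capabilities, with the primary agent as the fallback.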
Pipeline Workflows
For sequential multi-step tasks where each step depends on the previous:
workflows:
  competitive_report:
    name: "Competitive Analysis Report"
    trigger: "run competitive report on {companies}"
    steps:
      - agent: "research"
        task: "Research each company: {companies}. Return structured profiles."
        output_key: "profiles"
      - agent: "primary"
        task: "Compare these profiles and identify our strongest differentiators: {profiles}"
        output_key: "comparison"
      - agent: "primary"
        task: "Draft an executive summary based on: {comparison}"
        output_key: "summary"
      - action: "notion.create_page"
        title: "Competitive Report {date}"
        content: "{summary}"
Trigger it with:
Run competitive report on [Company A, Company B, Company C]
OpenClaw executes the pipeline, passing outputs between steps automatically.
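The core of such a pipeline runner is simple: keep a context dict, substitute earlier outputs into later task templates, and record each step's result under its `output_key`. A minimal Python sketch (`run_pipeline` and `fake_dispatch` are hypothetical names, not OpenClaw APIs):

```python
def run_pipeline(steps, inputs, dispatch):
    """Execute steps in order, substituting earlier outputs into later tasks."""
    context = dict(inputs)
    for step in steps:
        task = step["task"].format(**context)  # fill {placeholders}
        context[step["output_key"]] = dispatch(step["agent"], task)
    return context

# First two steps of the competitive_report workflow, abbreviated.
steps = [
    {"agent": "research", "task": "Research each company: {companies}.",
     "output_key": "profiles"},
    {"agent": "primary", "task": "Compare these profiles: {profiles}",
     "output_key": "comparison"},
]

# Stand-in dispatcher; a real one would POST to the agent's HTTP endpoint.
def fake_dispatch(agent, task):
    return f"[{agent}] done: {task[:30]}"

out = run_pipeline(steps, {"companies": "A, B, C"}, fake_dispatch)
```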
Memory Isolation vs Shared Memory
By default, multiple instances have separate memory stores. This is usually correct — your work agent's memory shouldn't pollute your personal agent's memory.
For shared knowledge (e.g. a project fact base both agents can reference):
memory:
  shared_store: "~/.openclaw-shared/memory"   # Path both instances read/write
  private_store: "~/.openclaw/memory"         # Instance-private memory
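The lookup order that makes this useful is private-first, shared-fallback: an instance's own memory wins, and the shared store fills gaps. A sketch of that order, assuming (hypothetically) one JSON file per remembered fact — the real on-disk format is not documented here:

```python
import json
import pathlib
import tempfile

def recall(key, private_dir, shared_dir):
    """Look a fact up in private memory first, then fall back to shared."""
    for store in (private_dir, shared_dir):
        path = pathlib.Path(store).expanduser() / f"{key}.json"
        if path.exists():
            return json.loads(path.read_text())
    return None

# Demo with temporary directories standing in for the two stores.
private = tempfile.mkdtemp()
shared = tempfile.mkdtemp()
pathlib.Path(shared, "project-stack.json").write_text(json.dumps("pgvector"))
```

With this ordering, writing a fact into an instance's private store shadows the shared value for that instance only.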
Rate Limiting and Cost Control
Parallel agents hit your LLM API simultaneously. With 3 parallel subtasks each consuming 1,000 tokens, you send 3,000 tokens in one burst rather than spread across three sequential calls. Total cost is identical, but the burst is far more likely to trip per-minute rate limits.
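One common defence is a sliding-window token budget in front of the API: reject (or queue) a call when its tokens plus those spent in the last 60 seconds would exceed the per-minute limit. A minimal sketch of that idea (class name hypothetical):

```python
import time
from collections import deque

class TokenRateLimiter:
    """Refuse a request when it would exceed a tokens-per-minute budget."""

    def __init__(self, tokens_per_minute):
        self.budget = tokens_per_minute
        self.events = deque()  # (timestamp, tokens) within the last 60 s

    def allow(self, tokens, now=None):
        now = time.monotonic() if now is None else now
        # Drop spend records older than the 60-second window.
        while self.events and now - self.events[0][0] > 60:
            self.events.popleft()
        used = sum(t for _, t in self.events)
        if used + tokens > self.budget:
            return False  # caller should wait or queue the subtask
        self.events.append((now, tokens))
        return True
```

A production version would sleep-and-retry instead of returning `False`, but the windowing logic is the part that matters for bursty parallel subtasks.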
Monitor usage:
# Check per-instance token usage
openclaw stats --config ~/.openclaw-research
openclaw stats --config ~/.openclaw
Set per-instance monthly limits:
# In each instance's config.yml
llm:
  monthly_token_limit: 500000  # Hard cap to prevent runaway costs
  warn_at_percent: 80
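The enforcement logic these two settings imply is a three-state check: normal below the warning threshold, warn between threshold and cap, block at the cap. Sketched in Python (function name hypothetical):

```python
def check_budget(used_tokens, limit=500_000, warn_at_percent=80):
    """Classify monthly usage against the hard cap and warning threshold."""
    if used_tokens >= limit:
        return "block"  # hard cap reached: refuse further LLM calls
    if used_tokens >= limit * warn_at_percent / 100:
        return "warn"   # past 80% of the cap: notify, keep serving
    return "ok"
```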
When Multi-Agent Is Overkill
For most personal users, a single OpenClaw instance with good SOUL.md and prompt design handles everything they need. Multi-agent setups add:
- More infrastructure to maintain
- Higher operational complexity
- More opportunities for things to go wrong
Start with a well-configured single instance. Move to multi-agent when you hit real limits — typically when you need genuinely different personalities/rules for different domains, or when sequential task execution is too slow.
Related reading: