A Raspberry Pi running OpenClaw 24/7 is one of the most cost-efficient personal AI agent setups possible. After the initial hardware cost, your ongoing expenses are electricity (~$2–5/year) and whatever LLM API you choose.
## Hardware Requirements
| Model | RAM | Status |
|---|---|---|
| Pi 5 (4GB/8GB) | 4–8GB | ★★★★★ Best choice |
| Pi 4 (8GB) | 8GB | ★★★★★ Excellent |
| Pi 4 (4GB) | 4GB | ★★★★☆ Good — cloud LLM only |
| Pi 4 (2GB) | 2GB | ★★★☆☆ Works, but tight |
| Pi 3 B+ | 1GB | ★★☆☆☆ Too slow for practical use |
For cloud LLM APIs (GPT-4o, Claude, Gemini): Pi 4 4GB works well. For local models via Ollama: Pi 5 or Pi 4 8GB is the minimum worth using.
Storage: Use a fast microSD (Class 10 / A2) or, better, an SSD via USB3 adapter. Constant read/writes to a slow SD card degrade performance and card longevity.
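To see what your current boot medium can actually do, a rough sequential-write test with `dd` is enough (the file name and size here are arbitrary; run it from the drive you want to measure):

```shell
# Write 64 MB and flush to disk, then report throughput.
# A decent A2 microSD manages ~20-40 MB/s sequential; a USB3 SSD typically 150+ MB/s.
result=$(dd if=/dev/zero of=./ddtest.bin bs=1M count=64 conv=fsync 2>&1 | tail -n 1)
rm -f ./ddtest.bin
echo "$result"
```

`conv=fsync` forces the data to actually hit the card before `dd` reports a speed; without it you mostly measure the RAM cache.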
## OS Setup
Use Raspberry Pi OS Lite (64-bit, Debian Bookworm) — no desktop environment needed.
Flash it with Raspberry Pi Imager. Before writing, configure:
- Hostname: `openclaw-pi` (or your choice)
- SSH: enabled
- Username/password or SSH key
Boot the Pi and SSH in:
```shell
ssh pi@openclaw-pi.local
```
Update the system:
```shell
sudo apt update && sudo apt upgrade -y
```
## Install Node.js

```shell
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt-get install -y nodejs
node --version
```
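OpenClaw needs a recent Node runtime, so a setup script should fail fast if the install didn't land. A minimal version guard looks like this (the hardcoded version string stands in for real `node --version` output):

```shell
ver="v20.11.1"            # in a real script: ver=$(node --version)
major=${ver#v}            # strip the leading "v"
major=${major%%.*}        # keep only the major version
if [ "$major" -ge 20 ]; then
  echo "Node OK ($ver)"
else
  echo "Need Node >= 20, found $ver" >&2
fi
```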
## Install OpenClaw

```shell
npm install -g openclaw
openclaw init
```
Walk through the init wizard. Your config will be at ~/.openclaw/config/.
Configure your LLM provider — for the Pi, Gemini Flash or GPT-4o-mini are recommended for cost efficiency:
```yaml
# ~/.openclaw/config/providers.yml
providers:
  gemini:
    api_key: "AIza-your-key"
    default_model: "gemini-2.0-flash"
```
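Since this file holds a live API key, restrict it to your user. The `mkdir`/`touch` lines below only make the snippet self-contained; after `openclaw init` the file already exists:

```shell
mkdir -p ~/.openclaw/config
touch ~/.openclaw/config/providers.yml
chmod 600 ~/.openclaw/config/providers.yml        # owner read/write only
stat -c '%a' ~/.openclaw/config/providers.yml     # prints 600
```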
## Run as a systemd Service

```shell
sudo nano /etc/systemd/system/openclaw.service
```
```ini
[Unit]
Description=OpenClaw Personal AI Agent
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=pi
ExecStart=/usr/bin/openclaw start
Restart=always
RestartSec=10
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
```
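On a 4GB Pi it can also be worth capping the service so a runaway agent can't starve the rest of the system. These are standard systemd directives, not anything OpenClaw-specific, and the limits below are illustrative; they go in the same `[Service]` section:

```ini
# Optional additions to [Service]
MemoryMax=2G          # hard memory cap; systemd kills the service above this
CPUQuota=300%         # at most 3 of the Pi's 4 cores
NoNewPrivileges=true  # basic hardening: the process can't gain privileges
```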
```shell
sudo systemctl daemon-reload
sudo systemctl enable openclaw
sudo systemctl start openclaw
sudo systemctl status openclaw
```
OpenClaw now starts automatically on boot and restarts if it crashes.
## Optional: Local Models with Ollama
For fully offline, zero-API-cost operation, install Ollama on the Pi:
```shell
curl -fsSL https://ollama.ai/install.sh | sh
```
Pull a small model suitable for Pi hardware:
```shell
ollama pull llama3.2:3b-instruct-q4_K_M   # ~2GB, reasonable quality
# Or for Pi 5 / 8GB Pi 4:
ollama pull llama3.1:8b-instruct-q4_K_M   # ~5GB, better quality
```
Configure OpenClaw to use Ollama:
```yaml
providers:
  ollama:
    base_url: "http://localhost:11434/v1"
    api_key: "ollama"
    default_model: "llama3.2:3b-instruct-q4_K_M"
```
Realistic performance on a Pi 5 with the 8B model is 4–8 tokens/second, so a 150-token response takes roughly 19–38 seconds. That is usable for non-urgent tasks but too slow for conversational use.
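That estimate is just response length divided by generation rate:

```shell
# seconds ~= tokens / (tokens per second), rounded to the nearest second
for rate in 4 8; do
  printf '150 tokens @ %s tok/s ~ %s s\n' "$rate" $(( (150 + rate / 2) / rate ))
done
```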
For conversational use on a Pi, use a cloud API. Reserve local models for batch processing or when you genuinely need fully offline operation.
## Performance Tuning
Reduce memory usage:
```yaml
# ~/.openclaw/config/config.yml
system:
  memory_cache_size: 50      # reduce from default 200
  context_history_limit: 20  # reduce stored context
```
Swap file for safety (Pi 4 4GB):
```shell
sudo dphys-swapfile swapoff
sudo nano /etc/dphys-swapfile
# Set CONF_SWAPSIZE=1024
sudo dphys-swapfile setup
sudo dphys-swapfile swapon
```
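After `swapon`, confirm the new size took effect (values are in MB; the exact total can differ slightly from what you configured):

```shell
free -m | awk '/^Swap:/ {print "swap total: " $2 " MB"}'
```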
SSD instead of microSD: This alone makes the biggest practical difference for Pi reliability. An SSD connected via USB3 adapter gives 10–20x faster I/O.
## Monitoring
Check OpenClaw logs:
```shell
sudo journalctl -u openclaw -f
```
Monitor resource usage:
```shell
htop
# Or
vcgencmd measure_temp   # Check CPU temperature
```
A healthy idle temperature is below 60°C. If the CPU sits above 80°C under sustained load, add a heatsink or fan.
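If you want an automated check (e.g. from cron), the `temp=61.2'C` string that `vcgencmd measure_temp` prints is easy to parse with shell parameter expansion. The hardcoded reading below stands in for real output so the sketch runs anywhere:

```shell
reading="temp=61.2'C"       # in a real script: reading=$(vcgencmd measure_temp)
temp=${reading#temp=}       # -> 61.2'C
temp=${temp%\'C}            # -> 61.2
if [ "${temp%.*}" -ge 80 ]; then   # compare whole degrees only
  echo "WARNING: CPU at ${temp}C - add cooling"
else
  echo "CPU temperature OK (${temp}C)"
fi
```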
## Accessing OpenClaw Remotely
If you want to access the Pi's OpenClaw dashboard from outside your home network, use a reverse proxy with authentication. The simplest reliable approach:
```shell
# On the Pi: install cloudflared for a free tunnel
curl -L https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-arm64 -o cloudflared
chmod +x cloudflared
./cloudflared tunnel --url http://localhost:3000
```
This gives you a public HTTPS URL without opening ports on your router. Add authentication in the Cloudflare Zero Trust dashboard.
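Note that a quick tunnel like this gets a new random URL on every start and dies with your SSH session. To keep it running across reboots, wrap it in a systemd unit like the OpenClaw one. This sketch assumes you moved the binary to `/usr/local/bin/cloudflared`; for a stable hostname you would need a named tunnel tied to a Cloudflare account instead:

```ini
# /etc/systemd/system/cloudflared.service
[Unit]
Description=Cloudflare tunnel for OpenClaw
After=network-online.target
Wants=network-online.target

[Service]
User=pi
ExecStart=/usr/local/bin/cloudflared tunnel --url http://localhost:3000
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Enable it the same way as the OpenClaw service: `sudo systemctl enable --now cloudflared`.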