Google's Gemini API is one of the most cost-effective LLM options for OpenClaw, especially at the Flash tier. This guide walks through the full setup from API key to first response.
If you haven't installed OpenClaw yet, start with the getting started guide.
## Step 1: Get Your Gemini API Key
- Go to aistudio.google.com
- Sign in with your Google account
- Click Get API key in the left sidebar
- Click Create API key → select a Google Cloud project (or create a new one)
- Copy the key (it starts with `AIza`)
Google AI Studio's free tier covers substantial usage. You don't need to add a credit card unless you exceed rate limits.
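You can sanity-check the key outside OpenClaw with a direct call to the Gemini REST API's `generateContent` endpoint. A minimal Python sketch using only the standard library (error handling omitted; `build_request` is a helper name for this example, not part of any SDK):

```python
import json
import urllib.request

API_BASE = "https://generativelanguage.googleapis.com/v1beta"

def build_request(api_key: str, model: str, prompt: str):
    """Return the (url, body) pair for a generateContent call."""
    url = f"{API_BASE}/models/{model}:generateContent?key={api_key}"
    body = {"contents": [{"parts": [{"text": prompt}]}]}
    return url, body

def generate(api_key: str, model: str, prompt: str) -> str:
    """POST the request and return the first candidate's text."""
    url, body = build_request(api_key, model, prompt)
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["candidates"][0]["content"]["parts"][0]["text"]

# Usage (requires a real key with quota):
#   generate("AIza...", "gemini-2.0-flash", "Say hello")
```

If this returns text, the key is live and you can move on to configuring OpenClaw.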
## Step 2: Configure OpenClaw

Add Gemini to your providers config at `~/.openclaw/config/providers.yml`:

```yaml
providers:
  gemini:
    api_key: "AIza-your-key-here"
    default_model: "gemini-2.0-flash"
    base_url: "https://generativelanguage.googleapis.com/v1beta"
    models:
      - id: "gemini-2.0-flash"
        max_tokens: 8192
      - id: "gemini-1.5-pro"
        max_tokens: 8192
      - id: "gemini-1.5-flash"
        max_tokens: 8192
```
Set Gemini as your active provider in `config.yml`:

```yaml
llm:
  active_provider: "gemini"
  active_model: "gemini-2.0-flash"
```
Restart and test:

```bash
openclaw restart
# Then send a test message through your WhatsApp/Telegram connection
```
## Choosing Between Gemini Models
| Model | Quality | Speed | Cost | Best For |
|---|---|---|---|---|
| `gemini-2.0-flash` | Very good | Very fast | Near-free (free tier) | Default for most daily use |
| `gemini-1.5-flash` | Good | Fast | Very cheap | High-volume, simple tasks |
| `gemini-1.5-pro` | Excellent | Moderate | Paid | Complex analysis, long documents |
| `gemini-ultra` | Best | Slower | Higher | Heavy research, complex reasoning |
For most personal OpenClaw deployments, gemini-2.0-flash as the default hits the sweet spot — genuinely capable responses at minimal cost.
## Smart Routing: Flash for Most, Pro for Heavy Tasks

Configure model routing to keep costs down:

```yaml
llm:
  routing:
    - pattern: "^(analyse|research|summarise|compare|draft long|explain in depth)"
      model: "gemini-1.5-pro"
    - default:
        model: "gemini-2.0-flash"
```
This means quick questions and everyday tasks use Flash, while explicitly complex requests escalate to Pro.
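Routing like this amounts to first-match regex dispatch. A rough Python sketch of the idea (the rule table and function name are illustrative, not OpenClaw internals):

```python
import re

# Mirrors the routing rules in the YAML config above.
ROUTING = [
    (r"^(analyse|research|summarise|compare|draft long|explain in depth)",
     "gemini-1.5-pro"),
]
DEFAULT_MODEL = "gemini-2.0-flash"

def pick_model(message: str) -> str:
    """Return the model of the first matching rule, else the default."""
    for pattern, model in ROUTING:
        if re.match(pattern, message, flags=re.IGNORECASE):
            return model
    return DEFAULT_MODEL
```

Because the patterns are anchored with `^`, only messages that open with a trigger word escalate; a message that merely mentions "research" mid-sentence stays on Flash.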
## Free Tier vs Paid: What You Get
Google's free tier for AI Studio includes:
- Gemini 1.5 Flash: 15 requests/minute, 1 million tokens/day
- Gemini 1.5 Pro: 2 requests/minute, 50k tokens/day
For personal OpenClaw use (30–50 messages/day), Flash's free tier is typically sufficient. If you hit rate limits, you'll see rate-limit (HTTP 429) errors or timeouts; the fix is enabling billing or switching to a paid tier.
Paid rates (as of early 2026):
- Gemini 1.5 Flash: ~$0.075 per 1M input tokens
- Gemini 1.5 Pro: ~$3.50 per 1M input tokens
For personal use on paid tier, expect ~$1–8/month depending on volume and model.
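To sanity-check those numbers against your own volume, the arithmetic is simple. A quick sketch using the input-token rates quoted above (output tokens cost more, so treat the result as a lower bound):

```python
# Input-token prices in USD per 1M tokens, as quoted above (early 2026).
PRICE_PER_1M_INPUT = {
    "gemini-1.5-flash": 0.075,
    "gemini-1.5-pro": 3.50,
}

def monthly_input_cost(model: str, msgs_per_day: int,
                       tokens_per_msg: int, days: int = 30) -> float:
    """Estimate a month of input-token spend for one model."""
    total_tokens = msgs_per_day * tokens_per_msg * days
    return total_tokens / 1_000_000 * PRICE_PER_1M_INPUT[model]

# 50 messages/day at ~2,000 input tokens each on Flash:
# monthly_input_cost("gemini-1.5-flash", 50, 2000)  -> about $0.23/month
```

Even generous assumptions keep all-Flash usage in the pennies; the budget only moves meaningfully when traffic routes to Pro.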
## What Gemini Does Well in OpenClaw
**Cost.** Gemini Flash is among the cheapest capable models available. If you're cost-sensitive, this is your default.

**Speed.** Flash responses are fast, often 1–2 seconds. Useful when you want quick replies in WhatsApp.

**Google Workspace context.** If your SOUL.md or skills reference Google services, Gemini tends to have better native understanding of Google product names, interfaces, and workflows.

**Long context.** Gemini 1.5 Pro's 1M-token context window is among the largest available, which makes it well suited to long-document processing.
## Potential Limitations
**Instruction-following consistency.** Gemini is capable but slightly less consistent than Claude at following complex, multi-rule SOUL.md instructions. If your SOUL.md is detailed, test it thoroughly.

**Tool/function calling.** Gemini's tool call support is solid for standard integrations but can have edge cases with complex skill invocations. Monitor the first few days for any misfired automations.

**Rate limits on the free tier.** The free tier's 2 requests/minute limit for Pro means it's only suitable for occasional use, not frequent conversations.
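Free-tier caps like these can be smoothed out with a client-side throttle in front of the API. A minimal sliding-window sketch (not OpenClaw's built-in behavior; the class and its interface are illustrative):

```python
import time
from collections import deque

class RateLimiter:
    """Client-side throttle to stay under a requests-per-minute cap."""

    def __init__(self, max_per_minute: int, clock=time.monotonic):
        self.max = max_per_minute
        self.clock = clock          # injectable for testing
        self.sent = deque()         # timestamps of recent requests

    def wait_time(self) -> float:
        """Seconds to wait before the next request is allowed (0 if ready)."""
        now = self.clock()
        # Drop timestamps older than the 60-second window.
        while self.sent and now - self.sent[0] >= 60:
            self.sent.popleft()
        if len(self.sent) < self.max:
            return 0.0
        return 60 - (now - self.sent[0])

    def record(self) -> None:
        """Call after each request is actually sent."""
        self.sent.append(self.clock())
```

Before each Pro call, sleep for `wait_time()` seconds and then `record()`; that keeps a 2 requests/minute budget without ever seeing a 429.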
## Running Gemini Alongside Other Providers

You can keep Gemini as a fallback alongside your primary provider:

```yaml
providers:
  gemini:
    api_key: "AIza-gemini-key"
    default_model: "gemini-2.0-flash"
  openai:
    api_key: "sk-openai-key"
    default_model: "gpt-4o"

llm:
  active_provider: "openai"
  fallback_provider: "gemini"
```
If OpenAI returns an error or rate limit, OpenClaw automatically falls back to Gemini. Useful for keeping things running without intervention.
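The fallback behavior boils down to a try/except around the primary provider. A simplified sketch with stand-in provider functions (names are hypothetical, for illustration only):

```python
def ask_with_fallback(prompt, primary, fallback):
    """Call primary(prompt); on any error, retry once with fallback."""
    try:
        return primary(prompt)
    except Exception:
        return fallback(prompt)

# Stand-in providers for demonstration:
def flaky_openai(prompt):
    raise RuntimeError("429: rate limited")

def gemini(prompt):
    return f"[gemini] {prompt}"
```

A real implementation would distinguish retryable errors (429, 5xx) from permanent ones (bad API key) rather than catching everything, but the control flow is the same.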
Related reading: