xAI's Grok models are now available through the xAI API, and because the API is OpenAI-compatible, connecting OpenClaw to Grok is straightforward. Here's the setup.
## Step 1: Get an xAI API Key
- Go to console.x.ai
- Sign in with your X (Twitter) account
- Navigate to API Keys
- Click Create API Key — name it "OpenClaw"
- Copy the key
Add credits to your account under Billing. xAI's API uses pay-per-token billing.
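Before wiring the key into OpenClaw, you can sanity-check it. xAI's API uses standard Bearer-token auth, and since the API is OpenAI-compatible it should mirror OpenAI's `/v1/models` listing endpoint. A minimal Python sketch of the authenticated request (the `XAI_API_KEY` environment variable name is an assumption; pass the request to `urllib.request.urlopen` to actually send it):

```python
import os
import urllib.request

def build_models_request(api_key: str) -> urllib.request.Request:
    """Build an authenticated GET request against xAI's model-list endpoint.

    The path follows the OpenAI convention that xAI's compatibility
    layer implies; the Authorization header is a standard Bearer token.
    """
    return urllib.request.Request(
        "https://api.x.ai/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

# XAI_API_KEY is a hypothetical env var name; use whatever you exported.
req = build_models_request(os.environ.get("XAI_API_KEY", "xai-your-key-here"))
print(req.full_url)
```

If the key is valid and funded, sending this request returns a JSON list of the models your account can access.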
## Step 2: Configure OpenClaw
Because xAI's API is OpenAI-compatible, you can use OpenClaw's OpenAI provider type with a custom base URL:
```yaml
providers:
  xai:
    api_key: "xai-your-key-here"
    base_url: "https://api.x.ai/v1"
    default_model: "grok-2"
    models:
      - id: "grok-2"
        max_tokens: 8192
      - id: "grok-2-mini"
        max_tokens: 4096
```
Then set xAI as the active provider in config.yml:
```yaml
llm:
  active_provider: "xai"
  active_model: "grok-2"
```
Restart OpenClaw and test with a message.
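Because the provider type is OpenAI-compatible, the request OpenClaw sends under the hood is a standard OpenAI-style chat-completions payload. A sketch of what that JSON body looks like for grok-2 (field names follow the OpenAI chat-completions schema; the prompt text is just an example):

```python
import json

# A minimal OpenAI-style chat-completions payload, as an
# OpenAI-compatible client would POST it to
# https://api.x.ai/v1/chat/completions.
payload = {
    "model": "grok-2",
    "max_tokens": 8192,  # matches the max_tokens in the provider config above
    "messages": [
        {"role": "user", "content": "Say hello in one sentence."},
    ],
}

body = json.dumps(payload)
print(body)
```

This is also a quick way to debug a misbehaving setup: if a raw request with this shape works against the base URL but OpenClaw doesn't, the problem is in the config, not the API.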
## Available Grok Models
| Model | Notes |
|---|---|
| grok-2 | Full-capability model, comparable to GPT-4o tier |
| grok-2-mini | Faster and cheaper; suited to lighter tasks |
| grok-2-vision | Supports image inputs |
Check api.x.ai for the latest model list — xAI updates its lineup frequently.
## Where Grok Works Well for OpenClaw
**Direct responses.** Grok tends toward concise, direct answers with less hedging. If you're frustrated by overly cautious AI responses, Grok leans the other way.
**Technical tasks.** Grok-2 performs well on coding, debugging, and technical reasoning, and is competitive with GPT-4o for most developer use cases.
**No topic avoidance.** Grok is less likely to refuse to engage with topics that other models decline. For some users this is a feature; for others it's irrelevant.
## Where Grok Has Limitations
**Instruction consistency.** Compared to Claude, Grok is slightly less consistent at following complex SOUL.md instruction sets across long conversations. If your personality file has detailed rules, monitor the first week closely.
**Ecosystem maturity.** The xAI API is newer than OpenAI or Anthropic. Occasional API quirks or unexpected changes are more likely.
**No free tier.** Unlike Google AI Studio's Gemini free tier, xAI API usage is billed from the start.
## Running Grok Alongside Other Providers
You're not locked into one provider. Keep multiple providers configured and route by task type:
```yaml
providers:
  xai:
    api_key: "xai-key"
    base_url: "https://api.x.ai/v1"
    default_model: "grok-2"
  anthropic:
    api_key: "sk-ant-key"
    default_model: "claude-sonnet-4-5"

llm:
  routing:
    - pattern: "^(code|debug|technical|build|review)"
      provider: "xai"
      model: "grok-2"
    - default:
      provider: "anthropic"
      model: "claude-sonnet-4-5"
```
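The routing config above is first-match-wins: each message is checked against the patterns in order, and the default entry catches everything else. The matcher logic can be sketched as follows (the function and rule names are illustrative, not OpenClaw internals):

```python
import re

# Illustrative routing rules mirroring the YAML config above.
ROUTES = [
    {"pattern": r"^(code|debug|technical|build|review)",
     "provider": "xai", "model": "grok-2"},
]
DEFAULT = {"provider": "anthropic", "model": "claude-sonnet-4-5"}

def route(message: str) -> dict:
    """Return the provider/model for a message: first matching pattern wins."""
    for rule in ROUTES:
        # re.match anchors at the start of the string, like the ^ in the YAML.
        if re.match(rule["pattern"], message):
            return {"provider": rule["provider"], "model": rule["model"]}
    return DEFAULT

print(route("debug this stack trace"))  # routed to xai / grok-2
print(route("draft a friendly email"))  # falls through to the default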
## Is Grok the Right Choice for You?
Grok makes sense if:
- You're an X/Twitter power user and aligned with the xAI ecosystem
- You prefer more direct, less filtered responses
- You want to try a frontier alternative to OpenAI and Anthropic
- You're running a technical/developer-focused OpenClaw setup
Stick with GPT-4o or Claude Sonnet if:
- You want the most consistent, well-documented API behaviour
- Your SOUL.md has complex instructions that require reliable instruction-following
- You want access to a free tier (use Gemini Flash instead)
Related reading: