# Anthropic Provider

The Anthropic provider connects GoClaw to Claude models via the Anthropic API.
## Configuration

```json
{
  "llm": {
    "providers": {
      "anthropic": {
        "driver": "anthropic",
        "apiKey": "YOUR_API_KEY",
        "promptCaching": true
      }
    },
    "agent": {
      "models": ["anthropic/claude-sonnet-4-20250514"]
    }
  }
}
```
Note: The setup wizard (`goclaw setup`) can detect `ANTHROPIC_API_KEY` in your environment and offer to write it to the config.
## Provider Options

| Field | Type | Default | Description |
|---|---|---|---|
| `driver` | string | - | Must be `"anthropic"` |
| `apiKey` | string | - | Anthropic API key |
| `maxTokens` | int | 200000 | Context window size |
| `promptCaching` | bool | true | Enable prompt caching (reduces cost) |
| `timeoutSeconds` | int | 300 | Request timeout |

Models are specified in the `agent.models` array using `provider/model` format (e.g., `anthropic/claude-sonnet-4-20250514`).
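The `provider/model` convention amounts to splitting a reference on its first slash. A minimal Go sketch of that idea (the helper name is hypothetical, not GoClaw's internal code):

```go
package main

import (
	"fmt"
	"strings"
)

// splitModelRef splits a "provider/model" reference into its parts.
// Hypothetical helper illustrating the format described above.
func splitModelRef(ref string) (provider, model string, ok bool) {
	return strings.Cut(ref, "/")
}

func main() {
	p, m, ok := splitModelRef("anthropic/claude-sonnet-4-20250514")
	fmt.Println(p, m, ok)
}
```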
## Models

| Model | Context | Best For |
|---|---|---|
| `claude-sonnet-4-20250514` | 200k | General agent use, balanced |
| `claude-opus-4-20250514` | 200k | Complex reasoning |
| `claude-3-haiku-20240307` | 200k | Fast, cheap summarization |
| `claude-3-5-sonnet-20241022` | 200k | Previous generation |
## Features

### Prompt Caching

When `promptCaching: true`, the system prompt is cached server-side by Anthropic. This reduces costs by up to 90% for repeated requests with the same system prompt. The cache expires after 5 minutes of inactivity.
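Under the hood, the Anthropic Messages API marks cacheable content with a `cache_control` block on the system prompt. A sketch of what such a request payload looks like (the prompt text is a placeholder):

```json
{
  "model": "claude-sonnet-4-20250514",
  "max_tokens": 1024,
  "system": [
    {
      "type": "text",
      "text": "You are a helpful agent...",
      "cache_control": {"type": "ephemeral"}
    }
  ],
  "messages": [
    {"role": "user", "content": "Hello"}
  ]
}
```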
### Extended Thinking

Anthropic supports extended thinking (reasoning) on Claude 3.5+ models. Configure it per user in `users.json`:

```json
{
  "users": [
    {
      "name": "Alice",
      "role": "owner",
      "thinking": true,
      "thinkingLevel": "medium"
    }
  ]
}
```
| Level | Token Budget | Use Case |
|---|---|---|
| `off` | 0 | Disabled |
| `minimal` | 1,024 | Quick responses |
| `low` | 4,096 | Light reasoning |
| `medium` | 10,000 | Balanced (default) |
| `high` | 25,000 | Deep reasoning |
| `xhigh` | 50,000 | Maximum effort |
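The level-to-budget mapping above is a simple lookup. A hypothetical Go sketch (function name and fallback behavior are assumptions; the values come from the table):

```go
package main

import "fmt"

// thinkingBudget maps a thinkingLevel setting to its token budget,
// using the values from the table above. Unknown levels fall back to
// "medium", the documented default.
func thinkingBudget(level string) int {
	switch level {
	case "off":
		return 0
	case "minimal":
		return 1024
	case "low":
		return 4096
	case "high":
		return 25000
	case "xhigh":
		return 50000
	default: // "medium"
		return 10000
	}
}

func main() {
	fmt.Println(thinkingBudget("medium"))
}
```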
## Multi-Provider Setup

For setups with multiple providers:

```json
{
  "llm": {
    "providers": {
      "claude": {
        "driver": "anthropic",
        "apiKey": "YOUR_API_KEY",
        "promptCaching": true
      },
      "ollama": {
        "driver": "ollama",
        "url": "http://localhost:11434"
      }
    },
    "agent": {
      "models": ["claude/claude-sonnet-4-20250514"]
    },
    "summarization": {
      "models": ["ollama/qwen2.5:7b"]
    }
  }
}
```
## Troubleshooting

### "invalid_api_key"

Verify your API key:

- Check that it starts with `sk-ant-`
- Verify it has not expired in the Anthropic console
- Check the `apiKey` field in `goclaw.json`
### Rate Limiting

If you hit rate limits, the provider enters a cooldown with automatic retry. Check status with the `/llm` command.
### Model Not Available

Some models require specific API access levels. Check your Anthropic account permissions.
## See Also

- LLM Providers — Provider overview
- Configuration — Full config reference