OpenAI Provider

The OpenAI provider connects GoClaw to OpenAI models and any OpenAI-compatible API (LM Studio, LocalAI, OpenRouter, Kimi, etc.).

Configuration

OpenAI

{
  "llm": {
    "providers": {
      "openai": {
        "driver": "openai",
        "apiKey": "YOUR_API_KEY"
      }
    },
    "agent": {
      "models": ["openai/gpt-4o"]
    }
  }
}

Local Server (LM Studio)

{
  "llm": {
    "providers": {
      "lmstudio": {
        "driver": "openai",
        "baseURL": "http://localhost:1234"
      }
    },
    "agent": {
      "models": ["lmstudio/your-model-name"]
    }
  }
}

API key is optional for local servers.

OpenRouter

{
  "llm": {
    "providers": {
      "openrouter": {
        "driver": "openai",
        "apiKey": "YOUR_OPENROUTER_KEY",
        "baseURL": "https://openrouter.ai/api"
      }
    },
    "agent": {
      "models": ["openrouter/anthropic/claude-3-opus"]
    }
  }
}

OpenRouter requests include GoClaw attribution headers automatically.

Options

| Field | Type | Default | Description |
|---|---|---|---|
| apiKey | string | - | API key (optional for local servers) |
| baseURL | string | OpenAI | API endpoint (auto-appends /v1 if needed) |
| maxTokens | int | - | Output token limit |
| contextTokens | int | auto | Context window override |
| timeoutSeconds | int | 300 | Request timeout |
| embeddingOnly | bool | false | Use only for embeddings |
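The "/v1 auto-append" behavior in the table above could be implemented roughly as follows; this is an illustration of the documented behavior, not GoClaw's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeBaseURL sketches the auto-append rule: a configured baseURL
// without a version segment gets /v1 added; one that already ends in /v1
// is left alone.
func normalizeBaseURL(raw string) string {
	trimmed := strings.TrimRight(raw, "/")
	if strings.HasSuffix(trimmed, "/v1") {
		return trimmed
	}
	return trimmed + "/v1"
}

func main() {
	fmt.Println(normalizeBaseURL("http://localhost:1234"))        // http://localhost:1234/v1
	fmt.Println(normalizeBaseURL("https://openrouter.ai/api/v1")) // already versioned, unchanged
}
```

This is why the OpenRouter examples in this page work whether the configured baseURL ends in /api or /api/v1.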

Unknown Model Context Windows

For custom or newly released models that are not in models.json, GoClaw falls back to a conservative context window. If the model supports a larger context, set contextTokens on the provider:

{
  "llm": {
    "providers": {
      "openrouter1": {
        "driver": "openai",
        "subtype": "openrouter",
        "apiKey": "YOUR_OPENROUTER_KEY",
        "baseURL": "https://openrouter.ai/api/v1",
        "contextTokens": 262144,
        "maxTokens": 8192
      }
    }
  }
}

Set contextTokens to the context window the provider documents for the model.
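The lookup order described above can be sketched as follows; the function shape and the 8192-token fallback are assumptions for illustration, not GoClaw's actual implementation:

```go
package main

import "fmt"

// contextWindow sketches the fallback logic: a provider-level contextTokens
// override wins, a model listed in models.json uses its entry, and anything
// unknown falls back to a conservative default.
func contextWindow(model string, known map[string]int, overrideTokens int) int {
	if overrideTokens > 0 {
		return overrideTokens // explicit contextTokens on the provider
	}
	if n, ok := known[model]; ok {
		return n // entry from models.json
	}
	return 8192 // conservative fallback for unknown models
}

func main() {
	known := map[string]int{"gpt-4o": 128000}
	fmt.Println(contextWindow("gpt-4o", known, 0))         // known model
	fmt.Println(contextWindow("new-model", known, 0))      // conservative fallback
	fmt.Println(contextWindow("new-model", known, 262144)) // explicit override
}
```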

Compatible APIs

The OpenAI provider works with any API that follows the OpenAI chat completions format:

| Service | Base URL | Notes |
|---|---|---|
| OpenAI | (default) | Official API |
| LM Studio | http://localhost:1234 | Local inference |
| LocalAI | http://localhost:8080 | Local inference |
| OpenRouter | https://openrouter.ai/api | Multi-provider gateway |
| Kimi | https://api.moonshot.cn | Moonshot AI |
| Together.ai | https://api.together.xyz | Cloud inference |

Features

Tool Calling

Supports native function calling for models that implement it (GPT-4, GPT-4o, etc.).

Vision

Supports image inputs for vision-capable models.

Embeddings

Can be used for embeddings with models like text-embedding-3-small:

{
  "llm": {
    "providers": {
      "openai-embed": {
        "driver": "openai",
        "apiKey": "YOUR_API_KEY",
        "embeddingOnly": true
      }
    },
    "embeddings": {
      "models": ["openai-embed/text-embedding-3-small"]
    }
  }
}

Reasoning (OpenRouter)

When using OpenRouter with reasoning-capable models, thinking levels are mapped to OpenRouter’s reasoning.effort parameter.

Troubleshooting

Connection Refused (Local Server)

  1. Verify the server is running
  2. Check the port matches your config
  3. Ensure the server exposes an OpenAI-compatible endpoint

Model Not Found

The model name must match exactly what the API expects. For local servers, check the model name with:

curl http://localhost:1234/v1/models

Rate Limiting

The provider enters cooldown automatically on rate limits. Check status with /llm command.


See Also