| summary | read_when | title |
| --- | --- | --- |
| Use OpenAI via API keys or Codex subscription in OpenClaw | | OpenAI |

# OpenAI
OpenAI provides developer APIs for GPT models. Codex supports ChatGPT sign-in for subscription access or API key sign-in for usage-based access. Codex cloud requires ChatGPT sign-in. OpenAI explicitly supports subscription OAuth usage in external tools/workflows like OpenClaw.
## Default interaction style
OpenClaw adds a small OpenAI-specific prompt overlay by default for both `openai/*` and `openai-codex/*` runs. The overlay keeps the assistant warm, collaborative, concise, and direct without replacing the base OpenClaw system prompt.
Config key: `plugins.entries.openai.config.personalityOverlay`

Allowed values:

- `"friendly"`: default; enable the OpenAI-specific overlay.
- `"off"`: disable the overlay and use the base OpenClaw prompt only.
Scope:

- Applies to `openai/*` models.
- Applies to `openai-codex/*` models.
- Does not affect other providers.
This behavior is enabled by default:

```json5
{
  plugins: {
    entries: {
      openai: {
        config: {
          personalityOverlay: "friendly",
        },
      },
    },
  },
}
```
### Disable the OpenAI prompt overlay
If you prefer the unmodified base OpenClaw prompt, turn the overlay off:
```json5
{
  plugins: {
    entries: {
      openai: {
        config: {
          personalityOverlay: "off",
        },
      },
    },
  },
}
```
You can also set it directly with the config CLI:
```bash
openclaw config set plugins.entries.openai.config.personalityOverlay off
```
## Option A: OpenAI API key (OpenAI Platform)
Best for: direct API access and usage-based billing. Get your API key from the OpenAI dashboard.
### CLI setup
```bash
openclaw onboard --auth-choice openai-api-key
# or non-interactive
openclaw onboard --openai-api-key "$OPENAI_API_KEY"
```
### Config snippet
```json5
{
  env: { OPENAI_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "openai/gpt-5.4" } } },
}
```
OpenAI's current API model docs list `gpt-5.4` and `gpt-5.4-pro` for direct OpenAI API usage. OpenClaw forwards both through the `openai/*` Responses path.

OpenClaw does not expose `openai/gpt-5.3-codex-spark` on the direct OpenAI API path: pi-ai still ships a built-in row for that model, but live OpenAI API requests currently reject it, so OpenClaw suppresses the stale row and treats Spark as Codex-only.
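A minimal sketch, if you want the pro variant as your default instead (same config shape as above; `gpt-5.4-pro` availability depends on your API account):

```json5
{
  env: { OPENAI_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "openai/gpt-5.4-pro" } } },
}
```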
## Option B: OpenAI Code (Codex) subscription
Best for: using ChatGPT/Codex subscription access instead of an API key. Codex cloud requires ChatGPT sign-in, while the Codex CLI supports ChatGPT or API key sign-in.
### CLI setup (Codex OAuth)
```bash
# Run Codex OAuth in the wizard
openclaw onboard --auth-choice openai-codex

# Or run OAuth directly
openclaw models auth login --provider openai-codex
```
### Config snippet (Codex subscription)
```json5
{
  agents: { defaults: { model: { primary: "openai-codex/gpt-5.4" } } },
}
```
OpenAI's current Codex docs list `gpt-5.4` as the current Codex model. OpenClaw maps that to `openai-codex/gpt-5.4` for ChatGPT/Codex OAuth usage.

If your Codex account is entitled to Codex Spark, OpenClaw also supports `openai-codex/gpt-5.3-codex-spark`.

OpenClaw treats Codex Spark as Codex-only. It does not expose a direct `openai/gpt-5.3-codex-spark` API-key path.

OpenClaw also preserves `openai-codex/gpt-5.3-codex-spark` when pi-ai discovers it. Treat it as entitlement-dependent and experimental: Codex Spark is separate from GPT-5.4 `/fast`, and availability depends on the signed-in Codex / ChatGPT account.
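If your account is entitled, pointing the default model at Spark is the same one-line change as above (a sketch; this will fail for accounts without the entitlement):

```json5
{
  agents: { defaults: { model: { primary: "openai-codex/gpt-5.3-codex-spark" } } },
}
```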
## Codex context window cap
OpenClaw treats the Codex model metadata and the runtime context cap as separate values.

For `openai-codex/gpt-5.4`:

- native `contextWindow`: `1050000`
- default runtime `contextTokens` cap: `272000`

That keeps model metadata truthful while preserving the smaller default runtime window that has better latency and quality characteristics in practice.
If you want a different effective cap, set `models.providers.<provider>.models[].contextTokens`:
```json5
{
  models: {
    providers: {
      "openai-codex": {
        models: [
          {
            id: "gpt-5.4",
            contextTokens: 160000,
          },
        ],
      },
    },
  },
}
```
Use `contextWindow` only when you are declaring or overriding native model metadata. Use `contextTokens` when you want to limit the runtime context budget.
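As a sketch of the distinction (assuming `contextWindow` is accepted on the same `models[]` entry shape shown above, which this page implies but does not spell out):

```json5
{
  models: {
    providers: {
      "openai-codex": {
        models: [
          {
            id: "gpt-5.4",
            // native metadata declaration/override (assumed to share the entry shape above)
            contextWindow: 1050000,
            // runtime context budget cap
            contextTokens: 272000,
          },
        ],
      },
    },
  },
}
```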
## Transport default
OpenClaw uses pi-ai for model streaming. For both `openai/*` and `openai-codex/*`, the default transport is `"auto"` (WebSocket-first, then SSE fallback).
You can set `agents.defaults.models.<provider/model>.params.transport`:

- `"sse"`: force SSE
- `"websocket"`: force WebSocket
- `"auto"`: try WebSocket, then fall back to SSE
For `openai/*` (Responses API), OpenClaw also enables WebSocket warm-up by default (`openaiWsWarmup: true`) when WebSocket transport is used.
Example:
```json5
{
  agents: {
    defaults: {
      model: { primary: "openai-codex/gpt-5.4" },
      models: {
        "openai-codex/gpt-5.4": {
          params: {
            transport: "auto",
          },
        },
      },
    },
  },
}
```
## OpenAI WebSocket warm-up
OpenAI docs describe warm-up as optional. OpenClaw enables it by default for `openai/*` to reduce first-turn latency when using WebSocket transport.
### Disable warm-up
```json5
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.4": {
          params: {
            openaiWsWarmup: false,
          },
        },
      },
    },
  },
}
```
### Enable warm-up explicitly
```json5
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.4": {
          params: {
            openaiWsWarmup: true,
          },
        },
      },
    },
  },
}
```
## OpenAI and Codex priority processing
OpenAI's API exposes priority processing via `service_tier=priority`. In OpenClaw, set `agents.defaults.models["<provider>/<model>"].params.serviceTier` to pass that field through on native OpenAI/Codex Responses endpoints.
```json5
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.4": {
          params: {
            serviceTier: "priority",
          },
        },
        "openai-codex/gpt-5.4": {
          params: {
            serviceTier: "priority",
          },
        },
      },
    },
  },
}
```
Supported values are `auto`, `default`, `flex`, and `priority`.
OpenClaw forwards `params.serviceTier` to both direct `openai/*` Responses requests and `openai-codex/*` Codex Responses requests when those models point at the native OpenAI/Codex endpoints.
Important behavior:

- direct `openai/*` must target `api.openai.com`
- `openai-codex/*` must target `chatgpt.com/backend-api`
- if you route either provider through another base URL or proxy, OpenClaw leaves `service_tier` untouched (see the sketch after this list)
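A sketch of the proxy case, assuming a provider-level `baseUrl` override key (hypothetical placement; this page only references `baseUrl` in the compaction section): with a non-native base URL, `params.serviceTier` is left out of the outgoing request rather than forwarded.

```json5
{
  models: {
    providers: {
      openai: {
        // hypothetical proxy override; not api.openai.com, so OpenClaw
        // leaves service_tier untouched on requests to this endpoint
        baseUrl: "https://llm-proxy.internal/v1",
      },
    },
  },
}
```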
## OpenAI fast mode
OpenClaw exposes a shared fast-mode toggle for both `openai/*` and `openai-codex/*` sessions:

- Chat/UI: `/fast status|on|off` (examples below)
- Config: `agents.defaults.models["<provider>/<model>"].params.fastMode`
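For example, from the chat/UI (commands as listed above):

```text
/fast status   # show whether fast mode is active for this session
/fast on       # enable fast mode for this session
/fast off      # disable fast mode for this session
```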
When fast mode is enabled, OpenClaw maps it to OpenAI priority processing:

- direct `openai/*` Responses calls to `api.openai.com` send `service_tier = "priority"`
- `openai-codex/*` Responses calls to `chatgpt.com/backend-api` also send `service_tier = "priority"`
- existing payload `service_tier` values are preserved
- fast mode does not rewrite `reasoning` or `text.verbosity`
Example:
```json5
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.4": {
          params: {
            fastMode: true,
          },
        },
        "openai-codex/gpt-5.4": {
          params: {
            fastMode: true,
          },
        },
      },
    },
  },
}
```
Session overrides win over config. Clearing the session override in the Sessions UI returns the session to the configured default.
## Native OpenAI versus OpenAI-compatible routes
OpenClaw treats direct OpenAI, Codex, and Azure OpenAI endpoints differently from generic OpenAI-compatible `/v1` proxies:

- native `openai/*`, `openai-codex/*`, and Azure OpenAI routes keep `reasoning: { effort: "none" }` intact when you explicitly disable reasoning
- native OpenAI-family routes default tool schemas to strict mode (sketched below)
- proxy-style OpenAI-compatible routes keep the looser compat behavior and do not force strict tool schemas or native-only request shaping

This preserves current native OpenAI Responses behavior without forcing older OpenAI-compatible shims onto third-party `/v1` backends.
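For reference, "strict mode" here is the OpenAI tool-schema flag. A sketch of what a strict function tool looks like on the wire (OpenAI Responses shape, not OpenClaw config; the tool name is made up):

```json5
{
  type: "function",
  name: "read_file", // example tool, not an OpenClaw built-in
  strict: true, // defaulted on native OpenAI-family routes
  parameters: {
    type: "object",
    properties: { path: { type: "string" } },
    required: ["path"],
    additionalProperties: false, // strict schemas must close off extra properties
  },
}
```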
## OpenAI Responses server-side compaction
For direct OpenAI Responses models (`openai/*` using `api: "openai-responses"` with `baseUrl` on `api.openai.com`), OpenClaw now auto-enables OpenAI server-side compaction payload hints:

- Forces `store: true` (unless model compat sets `supportsStore: false`)
- Injects `context_management: [{ type: "compaction", compact_threshold: ... }]`

By default, `compact_threshold` is 70% of the model `contextWindow` (or `80000` when unavailable). For example, a model with a declared `contextWindow` of `200000` gets a default `compact_threshold` of `140000`.
### Enable server-side compaction explicitly

Use this when you want to force `context_management` injection on compatible Responses models (for example Azure OpenAI Responses):
```json5
{
  agents: {
    defaults: {
      models: {
        "azure-openai-responses/gpt-5.4": {
          params: {
            responsesServerCompaction: true,
          },
        },
      },
    },
  },
}
```
### Enable with a custom threshold
```json5
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.4": {
          params: {
            responsesServerCompaction: true,
            responsesCompactThreshold: 120000,
          },
        },
      },
    },
  },
}
```
### Disable server-side compaction
```json5
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.4": {
          params: {
            responsesServerCompaction: false,
          },
        },
      },
    },
  },
}
```
`responsesServerCompaction` only controls `context_management` injection. Direct OpenAI Responses models still force `store: true` unless compat sets `supportsStore: false`.
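If you need to opt a model out of forced `store: true`, the flag would sit in model compat (a sketch; the exact placement of `supportsStore` is an assumption, since this page only names the flag):

```json5
{
  models: {
    providers: {
      openai: {
        models: [
          {
            id: "gpt-5.4",
            // assumption: supportsStore lives in a compat block on the model entry
            compat: { supportsStore: false },
          },
        ],
      },
    },
  },
}
```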
## Notes

- Model refs always use `provider/model` (see /concepts/models).
- Auth details + reuse rules are in /concepts/oauth.