docs: clarify adaptive thinking and openai websocket docs

Peter Steinberger 2026-03-02 05:16:09 +00:00
parent e1e715c53d
commit bc0288bcfb
4 changed files with 25 additions and 8 deletions


@@ -874,6 +874,7 @@ Your configured aliases always win over defaults.
Z.AI GLM-4.x models automatically enable thinking mode unless you set `--thinking off` or define `agents.defaults.models["zai/<model>"].params.thinking` yourself.
Z.AI models enable `tool_stream` by default for tool call streaming. Set `agents.defaults.models["zai/<model>"].params.tool_stream` to `false` to disable it.
Anthropic Claude 4.6 models default to `adaptive` thinking when no explicit thinking level is set.
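As a sketch, these per-model defaults can be overridden in config. The model IDs below are illustrative, and the exact value shape accepted by `thinking` may differ by provider:

```json5
{
  agents: {
    defaults: {
      models: {
        // Hypothetical model IDs for illustration only.
        "zai/glm-4.7": {
          params: {
            thinking: "off",    // opt out of Z.AI's automatic thinking mode
            tool_stream: false, // disable tool call streaming
          },
        },
        "anthropic/claude-4.6": {
          params: {
            thinking: "high", // replaces the adaptive default
          },
        },
      },
    },
  },
}
```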
### `agents.defaults.cliBackends`


@@ -35,6 +35,15 @@ openclaw onboard --anthropic-api-key "$ANTHROPIC_API_KEY"
}
```
## Thinking defaults (Claude 4.6)
- Anthropic Claude 4.6 models default to `adaptive` thinking in OpenClaw when no explicit thinking level is set.
- You can override per-message (`/think:<level>`) or in model params:
`agents.defaults.models["anthropic/<model>"].params.thinking`.
- Related Anthropic docs:
- https://platform.claude.com/docs/en/build-with-claude/adaptive-thinking
- https://platform.claude.com/docs/en/build-with-claude/extended-thinking
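For example, a per-model override might look like this (the model ID is illustrative; substitute the Claude 4.6 model you actually use):

```json5
{
  agents: {
    defaults: {
      models: {
        // Hypothetical model ID for illustration.
        "anthropic/claude-4.6-opus": {
          params: {
            thinking: "high", // replaces the adaptive default for this model
          },
        },
      },
    },
  },
}
```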
## Prompt caching (Anthropic API)
OpenClaw supports Anthropic's prompt caching feature. This is **API-only**; subscription auth does not honor cache settings.


@@ -29,7 +29,7 @@ openclaw onboard --openai-api-key "$OPENAI_API_KEY"
```json5
{
env: { OPENAI_API_KEY: "sk-..." },
agents: { defaults: { model: { primary: "openai/gpt-5.1-codex" } } },
agents: { defaults: { model: { primary: "openai/gpt-5.2" } } },
}
```
@@ -71,6 +71,11 @@ You can set `agents.defaults.models.<provider/model>.params.transport`:
For `openai/*` (Responses API), OpenClaw also enables WebSocket warm-up by
default (`openaiWsWarmup: true`) when WebSocket transport is used.
Related OpenAI docs:
- Realtime API with WebSocket: https://platform.openai.com/docs/guides/realtime-websocket
- Streaming API responses (SSE): https://platform.openai.com/docs/guides/streaming-responses
```json5
{
agents: {
@@ -100,7 +105,7 @@ OpenAI docs describe warm-up as optional. OpenClaw enables it by default for
agents: {
defaults: {
models: {
"openai/gpt-5": {
"openai/gpt-5.2": {
params: {
openaiWsWarmup: false,
},
@@ -118,7 +123,7 @@ OpenAI docs describe warm-up as optional. OpenClaw enables it by default for
agents: {
defaults: {
models: {
"openai/gpt-5": {
"openai/gpt-5.2": {
params: {
openaiWsWarmup: true,
},
@@ -151,7 +156,7 @@ Responses models (for example Azure OpenAI Responses):
agents: {
defaults: {
models: {
"azure-openai-responses/gpt-4o": {
"azure-openai-responses/gpt-5.2": {
params: {
responsesServerCompaction: true,
},
@@ -169,7 +174,7 @@ Responses models (for example Azure OpenAI Responses):
agents: {
defaults: {
models: {
"openai/gpt-5": {
"openai/gpt-5.2": {
params: {
responsesServerCompaction: true,
responsesCompactThreshold: 120000,
@@ -188,7 +193,7 @@ Responses models (for example Azure OpenAI Responses):
agents: {
defaults: {
models: {
"openai/gpt-5": {
"openai/gpt-5.2": {
params: {
responsesServerCompaction: false,
},


@@ -10,15 +10,17 @@ title: "Thinking Levels"
## What it does
- Inline directive in any inbound body: `/t <level>`, `/think:<level>`, or `/thinking <level>`.
- Levels (aliases): `off | minimal | low | medium | high | xhigh` (GPT-5.2 + Codex models only)
- Levels (aliases): `off | minimal | low | medium | high | xhigh | adaptive`
- minimal → “think”
- low → “think hard”
- medium → “think harder”
- high → “ultrathink” (max budget)
- xhigh → “ultrathink+” (GPT-5.2 + Codex models only)
- adaptive → provider-managed adaptive reasoning budget (supported for the Anthropic Claude 4.6 model family)
- `x-high`, `x_high`, `extra-high`, `extra high`, and `extra_high` map to `xhigh`.
- `highest` and `max` map to `high`.
- Provider notes:
- Anthropic Claude 4.6 models default to `adaptive` when no explicit thinking level is set.
- Z.AI (`zai/*`) only supports binary thinking (`on`/`off`). Any non-`off` level is treated as `on` (mapped to `low`).
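For example, prefixing any inbound message with a directive applies that level to just that message (message text is illustrative):

```
/think:high Review this migration plan and list the risky steps.
```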
## Resolution order
@@ -26,7 +28,7 @@ title: "Thinking Levels"
1. Inline directive on the message (applies only to that message).
2. Session override (set by sending a directive-only message).
3. Global default (`agents.defaults.thinkingDefault` in config).
4. Fallback: low for reasoning-capable models; off otherwise.
4. Fallback: `adaptive` for Anthropic Claude 4.6 models, `low` for other reasoning-capable models, `off` otherwise.
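The global default from step 3 can be sketched as follows (the level value is illustrative):

```json5
{
  agents: {
    defaults: {
      thinkingDefault: "medium",
    },
  },
}
```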
## Setting a session default