mirror of https://github.com/openclaw/openclaw.git

docs: refresh openai compatible proxy guides

This commit is contained in:
parent 69980e8bf4
commit edc470f6b0
@@ -628,6 +628,10 @@ Notes:
- `maxTokens: 8192`
- Recommended: set explicit values that match your proxy/model limits.
- For `api: "openai-completions"` on non-native endpoints (any non-empty `baseUrl` whose host is not `api.openai.com`), OpenClaw forces `compat.supportsDeveloperRole: false` to avoid provider 400 errors for unsupported `developer` roles.
- Proxy-style OpenAI-compatible routes also skip native OpenAI-only request shaping: no `service_tier`, no Responses `store`, no prompt-cache hints, no OpenAI reasoning-compat payload shaping, and no hidden OpenClaw attribution headers.
- If `baseUrl` is empty/omitted, OpenClaw keeps the default OpenAI behavior (which resolves to `api.openai.com`).
- For safety, an explicit `compat.supportsDeveloperRole: true` is still overridden on non-native `openai-completions` endpoints.
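The notes above describe provider-level settings. A minimal sketch of such a provider entry, assuming a hypothetical provider key `myproxy` and the `models.providers` nesting named elsewhere in these guides — only `api`, `baseUrl`, `maxTokens`, and the `compat.supportsDeveloperRole` behavior come from the notes themselves:

```json5
// Sketch only: the provider key "myproxy" and host are illustrative.
{
  models: {
    providers: {
      myproxy: {
        api: "openai-completions",
        // Non-native host (not api.openai.com): OpenClaw forces
        // compat.supportsDeveloperRole to false, even if set to true here.
        baseUrl: "http://proxy.internal:4000/v1",
        // Explicit limit matching the proxy/model, per the recommendation above.
        maxTokens: 8192,
      },
    },
  },
}
```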
@@ -94,6 +94,15 @@ You can point OpenClaw at the proxy as a custom OpenAI-compatible endpoint:
}
```

This path uses the same proxy-style OpenAI-compatible route as other custom
`/v1` backends:

- native OpenAI-only request shaping does not apply
- no `service_tier`, no Responses `store`, no prompt-cache hints, and no
  OpenAI reasoning-compat payload shaping
- hidden OpenClaw attribution headers (`originator`, `version`, `User-Agent`)
  are not injected on the proxy URL

## Available Models

| Model ID | Maps To |
@@ -145,8 +145,13 @@ curl "http://localhost:4000/spend/logs" \
## Notes

- LiteLLM runs on `http://localhost:4000` by default
- OpenClaw connects via the OpenAI-compatible `/v1/chat/completions` endpoint
- All OpenClaw features work through LiteLLM — no limitations
- OpenClaw connects through LiteLLM's proxy-style OpenAI-compatible `/v1`
  endpoint
- Native OpenAI-only request shaping does not apply through LiteLLM:
  no `service_tier`, no Responses `store`, no prompt-cache hints, and no
  OpenAI reasoning-compat payload shaping
- Hidden OpenClaw attribution headers (`originator`, `version`, `User-Agent`)
  are not injected on custom LiteLLM base URLs

## See also
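Pointing OpenClaw at a LiteLLM proxy could look like the following sketch; the provider key `litellm` and the exact field names are assumptions based on the other guides, not verbatim from this commit:

```json5
// Sketch only: "litellm" as a provider key is illustrative.
{
  models: {
    providers: {
      litellm: {
        api: "openai-completions",
        // LiteLLM's default port, per the notes above.
        baseUrl: "http://localhost:4000/v1",
      },
    },
  },
}
```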
@@ -102,3 +102,14 @@ curl http://127.0.0.1:30000/v1/models
- If requests fail with auth errors, set a real `SGLANG_API_KEY` that matches
  your server configuration, or configure the provider explicitly under
  `models.providers.sglang`.

## Proxy-style behavior

SGLang is treated as a proxy-style OpenAI-compatible `/v1` backend, not a
native OpenAI endpoint.

- native OpenAI-only request shaping does not apply here
- no `service_tier`, no Responses `store`, no prompt-cache hints, and no
  OpenAI reasoning-compat payload shaping
- hidden OpenClaw attribution headers (`originator`, `version`, `User-Agent`)
  are not injected on custom SGLang base URLs
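The guide names `models.providers.sglang` as the explicit configuration point. A hedged sketch of such an entry, with the host/port taken from the curl example and the field names assumed from the surrounding guides:

```json5
// models.providers.sglang is the documented location; the fields are a sketch.
{
  models: {
    providers: {
      sglang: {
        api: "openai-completions",
        baseUrl: "http://127.0.0.1:30000/v1",
        // Illustrative value; must match the server's SGLANG_API_KEY.
        apiKey: "sglang-local-key",
      },
    },
  },
}
```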
@@ -90,3 +90,14 @@ curl http://127.0.0.1:8000/v1/models
```

- If requests fail with auth errors, set a real `VLLM_API_KEY` that matches your server configuration, or configure the provider explicitly under `models.providers.vllm`.

## Proxy-style behavior

vLLM is treated as a proxy-style OpenAI-compatible `/v1` backend, not a native
OpenAI endpoint.

- native OpenAI-only request shaping does not apply here
- no `service_tier`, no Responses `store`, no prompt-cache hints, and no
  OpenAI reasoning-compat payload shaping
- hidden OpenClaw attribution headers (`originator`, `version`, `User-Agent`)
  are not injected on custom vLLM base URLs