mirror of https://github.com/openclaw/openclaw.git

commit 6b100e4dcf (parent 9e0cf17d0c)

docs: expand static provider catalogs
````diff
@@ -12,6 +12,7 @@ read_when:
 - Provider: `deepseek`
 - Auth: `DEEPSEEK_API_KEY`
 - API: OpenAI-compatible
+- Base URL: `https://api.deepseek.com`
 
 ## Quick start
 
````
````diff
@@ -40,14 +41,13 @@ If the Gateway runs as a daemon (launchd/systemd), make sure `DEEPSEEK_API_KEY`
 is available to that process (for example, in `~/.openclaw/.env` or via
 `env.shellEnv`).
 
-## Available models
+## Built-in catalog
 
-| Model ID            | Name                     | Type      | Context |
-| ------------------- | ------------------------ | --------- | ------- |
-| `deepseek-chat`     | DeepSeek Chat (V3.2)     | General   | 128K    |
-| `deepseek-reasoner` | DeepSeek Reasoner (V3.2) | Reasoning | 128K    |
+| Model ref                    | Name              | Input | Context | Max output | Notes                                             |
+| ---------------------------- | ----------------- | ----- | ------- | ---------- | ------------------------------------------------- |
+| `deepseek/deepseek-chat`     | DeepSeek Chat     | text  | 131,072 | 8,192      | Default model; DeepSeek V3.2 non-thinking surface |
+| `deepseek/deepseek-reasoner` | DeepSeek Reasoner | text  | 131,072 | 65,536     | Reasoning-enabled V3.2 surface                    |
 
-- **deepseek-chat** corresponds to DeepSeek-V3.2 in non-thinking mode.
-- **deepseek-reasoner** corresponds to DeepSeek-V3.2 in thinking mode with chain-of-thought reasoning.
+Both bundled models currently advertise streaming usage compatibility in source.
 
 Get your API key at [platform.deepseek.com](https://platform.deepseek.com/api_keys).
````
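For the daemon case in the hunk above, the `~/.openclaw/.env` route can be sketched as a plain env file; the path and variable name come from the doc itself, and the key value is a placeholder:

```bash
# ~/.openclaw/.env: loaded so the Gateway daemon (launchd/systemd)
# sees the key even without your interactive shell environment.
DEEPSEEK_API_KEY=sk-placeholder
```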
````diff
@@ -29,6 +29,20 @@ openclaw onboard --mistral-api-key "$MISTRAL_API_KEY"
 }
 ```
 
+## Built-in LLM catalog
+
+OpenClaw currently ships this bundled Mistral catalog:
+
+| Model ref                        | Input       | Context | Max output | Notes                    |
+| -------------------------------- | ----------- | ------- | ---------- | ------------------------ |
+| `mistral/mistral-large-latest`   | text, image | 262,144 | 16,384     | Default model            |
+| `mistral/mistral-medium-2508`    | text, image | 262,144 | 8,192      | Mistral Medium 3.1       |
+| `mistral/mistral-small-latest`   | text, image | 128,000 | 16,384     | Smaller multimodal model |
+| `mistral/pixtral-large-latest`   | text, image | 128,000 | 32,768     | Pixtral                  |
+| `mistral/codestral-latest`       | text        | 256,000 | 4,096      | Coding                   |
+| `mistral/devstral-medium-latest` | text        | 262,144 | 32,768     | Devstral 2               |
+| `mistral/magistral-small`        | text        | 128,000 | 40,000     | Reasoning-enabled        |
+
 ## Config snippet (audio transcription with Voxtral)
 
 ```json5
````
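To point an agent at one of the bundled model refs from the new catalog, a config sketch might look like the following. The `agents.defaults.model` key path is an assumption, not confirmed by this diff; check OpenClaw's main configuration docs for the authoritative schema:

```json5
// Hypothetical: select the default Mistral model from the catalog above.
// Key names are illustrative, not verified against OpenClaw's schema.
{
  agents: {
    defaults: {
      model: { primary: "mistral/mistral-large-latest" },
    },
  },
}
```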
````diff
@@ -45,11 +45,14 @@ If you still pass `--token`, remember it lands in shell history and `ps` output;
 
 ## Model IDs
 
-- `nvidia/llama-3.1-nemotron-70b-instruct` (default)
-- `meta/llama-3.3-70b-instruct`
-- `nvidia/mistral-nemo-minitron-8b-8k-instruct`
+| Model ref                                            | Name                                     | Context | Max output |
+| ---------------------------------------------------- | ---------------------------------------- | ------- | ---------- |
+| `nvidia/nvidia/llama-3.1-nemotron-70b-instruct`      | NVIDIA Llama 3.1 Nemotron 70B Instruct   | 131,072 | 4,096      |
+| `nvidia/meta/llama-3.3-70b-instruct`                 | Meta Llama 3.3 70B Instruct              | 131,072 | 4,096      |
+| `nvidia/nvidia/mistral-nemo-minitron-8b-8k-instruct` | NVIDIA Mistral NeMo Minitron 8B Instruct | 8,192   | 2,048      |
 
 ## Notes
 
 - OpenAI-compatible `/v1` endpoint; use an API key from NVIDIA NGC.
-- Provider auto-enables when `NVIDIA_API_KEY` is set; uses static defaults (131,072-token context window, 4,096 max tokens).
+- Provider auto-enables when `NVIDIA_API_KEY` is set.
+- The bundled catalog is static; costs default to `0` in source.
````
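The new model refs prefix OpenClaw's `nvidia` provider onto NVIDIA's own `org/name` upstream IDs, which is why the prefix appears doubled. A minimal sketch of splitting such a ref, assuming the first slash is the provider separator (the helper name is hypothetical, not an OpenClaw API):

```python
def split_model_ref(ref: str) -> tuple[str, str]:
    """Split a model ref into (provider, upstream model ID).

    Only the first slash separates the provider; NVIDIA's upstream
    IDs keep their own `org/name` slash intact.
    """
    provider, _, model_id = ref.partition("/")
    return provider, model_id

provider, model_id = split_model_ref("nvidia/nvidia/llama-3.1-nemotron-70b-instruct")
# provider == "nvidia", model_id == "nvidia/llama-3.1-nemotron-70b-instruct"
```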
````diff
@@ -57,6 +57,21 @@ openclaw onboard --auth-choice stepfun-plan-api-key-intl --stepfun-api-key "$STE
 - Step Plan default model: `stepfun-plan/step-3.5-flash`
 - Step Plan alternate model: `stepfun-plan/step-3.5-flash-2603`
 
+## Built-in catalogs
+
+Standard (`stepfun`):
+
+| Model ref                | Context | Max output | Notes                  |
+| ------------------------ | ------- | ---------- | ---------------------- |
+| `stepfun/step-3.5-flash` | 262,144 | 65,536     | Default standard model |
+
+Step Plan (`stepfun-plan`):
+
+| Model ref                          | Context | Max output | Notes                      |
+| ---------------------------------- | ------- | ---------- | -------------------------- |
+| `stepfun-plan/step-3.5-flash`      | 262,144 | 65,536     | Default Step Plan model    |
+| `stepfun-plan/step-3.5-flash-2603` | 262,144 | 65,536     | Additional Step Plan model |
+
 ## Config snippets
 
 Standard provider:
````
````diff
@@ -13,15 +13,18 @@ OpenAI-compatible endpoint with API-key authentication. Create your API key in the
 [Xiaomi MiMo console](https://platform.xiaomimimo.com/#/console/api-keys), then configure the
 bundled `xiaomi` provider with that key.
 
-## Model overview
+## Built-in catalog
 
-- **mimo-v2-flash**: default text model, 262144-token context window
-- **mimo-v2-pro**: reasoning text model, 1048576-token context window
-- **mimo-v2-omni**: reasoning multimodal model with text and image input, 262144-token context window
 - Base URL: `https://api.xiaomimimo.com/v1`
 - API: `openai-completions`
 - Authorization: `Bearer $XIAOMI_API_KEY`
 
+| Model ref              | Input       | Context   | Max output | Notes                        |
+| ---------------------- | ----------- | --------- | ---------- | ---------------------------- |
+| `xiaomi/mimo-v2-flash` | text        | 262,144   | 8,192      | Default model                |
+| `xiaomi/mimo-v2-pro`   | text        | 1,048,576 | 32,000     | Reasoning-enabled            |
+| `xiaomi/mimo-v2-omni`  | text, image | 262,144   | 32,000     | Reasoning-enabled multimodal |
+
 ## CLI setup
 
 ```bash
````
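Since the provider above is `openai-completions` with Bearer auth against the documented base URL, a request can be sketched by building the standard OpenAI-style chat payload without sending it. The base URL and auth scheme come from the hunk; the function name and returned shape are illustrative, not an OpenClaw API:

```python
import json

# From the catalog above: the bundled `xiaomi` provider's endpoint.
BASE_URL = "https://api.xiaomimimo.com/v1"

def build_chat_request(api_key: str, model: str, prompt: str) -> dict:
    """Assemble (but do not send) an OpenAI-compatible chat request."""
    return {
        "url": f"{BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_chat_request("sk-example", "mimo-v2-flash", "Hello")
```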