docs: refresh minimax and kilocode refs

Peter Steinberger 2026-04-04 09:45:18 +01:00
parent 323415204e
commit b5608397d0
3 changed files with 26 additions and 10 deletions


@@ -293,6 +293,8 @@ OpenClaw ships with the piai catalog. These providers require **no**
 - Static fallback catalog ships `kilocode/kilo/auto`; live
   `https://api.kilo.ai/api/gateway/models` discovery can expand the runtime
   catalog further.
+- Exact upstream routing behind `kilocode/kilo/auto` is owned by Kilo Gateway,
+  not hard-coded in OpenClaw.
 
 See [/providers/kilocode](/providers/kilocode) for setup details.
@@ -309,6 +311,9 @@ See [/providers/kilocode](/providers/kilocode) for setup details.
   - Example model: `kilocode/kilo/auto`
 - MiniMax: `minimax` (`MINIMAX_API_KEY`)
   - Example model: `minimax/MiniMax-M2.7`
+  - MiniMax onboarding/API-key setup writes explicit M2.7 model definitions with
+    `input: ["text", "image"]`; the bundled provider catalog keeps the chat refs
+    text-only until that provider config is materialized
 - Moonshot: `moonshot` (`MOONSHOT_API_KEY`)
   - Example model: `moonshot/kimi-k2.5`
 - Kimi Coding: `kimi` (`KIMI_API_KEY` or `KIMICODE_API_KEY`)
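As context for the refs listed above: they follow a `provider/model` shape in which the model id itself may contain slashes (e.g. `kilo/auto`). A minimal sketch of splitting such a ref on the first slash only; the helper name is hypothetical, not part of OpenClaw:

```typescript
// Hypothetical helper: split a model ref on the FIRST slash only,
// since model ids like "kilo/auto" contain slashes themselves.
function parseModelRef(ref: string): { provider: string; model: string } {
  const slash = ref.indexOf("/");
  if (slash === -1) {
    throw new Error(`not a provider/model ref: ${ref}`);
  }
  return {
    provider: ref.slice(0, slash),
    model: ref.slice(slash + 1),
  };
}

// Refs from the provider list above:
parseModelRef("kilocode/kilo/auto");   // provider "kilocode", model "kilo/auto"
parseModelRef("minimax/MiniMax-M2.7"); // provider "minimax", model "MiniMax-M2.7"
```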


@@ -44,11 +44,11 @@ export KILOCODE_API_KEY="<your-kilocode-api-key>" # pragma: allowlist secret
 ## Default model
 
-The default model is `kilocode/kilo/auto`, a smart routing model that automatically selects
-the best underlying model based on the task:
+The default model is `kilocode/kilo/auto`, a provider-owned smart-routing
+model managed by Kilo Gateway.
 
-- Planning, debugging, and orchestration tasks route to Claude Opus
-- Code writing and exploration tasks route to Claude Sonnet
+OpenClaw treats `kilocode/kilo/auto` as the stable default ref, but does not
+publish a source-backed task-to-upstream-model mapping for that route.
 
 ## Available models
@@ -75,6 +75,8 @@ kilocode/google/gemini-3-pro-preview
   and `maxTokens: 128000`
 - At startup, OpenClaw tries `GET https://api.kilo.ai/api/gateway/models` and
   merges discovered models ahead of the static fallback catalog
+- Exact upstream routing behind `kilocode/kilo/auto` is owned by Kilo Gateway,
+  not hard-coded in OpenClaw
 - Kilo Gateway is documented in source as OpenRouter-compatible, so it stays on
   the proxy-style OpenAI-compatible path rather than native OpenAI request shaping
 - For more model/provider options, see [/concepts/model-providers](/concepts/model-providers).
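The startup behavior described in this hunk (try live discovery, merge ahead of the static fallback) could be sketched roughly as follows. Only the endpoint URL and the precedence rule come from the docs; the response shape, type names, and merge helper are assumptions:

```typescript
// Shapes here are assumptions; only the endpoint URL and the
// "discovered ahead of static fallback" precedence come from the docs.
type CatalogModel = { id: string };

const STATIC_FALLBACK: CatalogModel[] = [{ id: "kilo/auto" }];

// Discovered models take precedence; duplicates (by id) keep the
// discovered entry and drop the static one.
function mergeCatalogs(
  discovered: CatalogModel[],
  fallback: CatalogModel[],
): CatalogModel[] {
  const seen = new Set<string>();
  return [...discovered, ...fallback].filter((m) => {
    if (seen.has(m.id)) return false;
    seen.add(m.id);
    return true;
  });
}

async function loadKiloCatalog(): Promise<CatalogModel[]> {
  try {
    const res = await fetch("https://api.kilo.ai/api/gateway/models");
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    // The payload shape is assumed; adjust to the real response.
    const discovered = (await res.json()) as CatalogModel[];
    return mergeCatalogs(discovered, STATIC_FALLBACK);
  } catch {
    // Gateway unreachable: the static fallback catalog still works.
    return STATIC_FALLBACK;
  }
}
```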


@@ -12,8 +12,8 @@ OpenClaw's MiniMax provider defaults to **MiniMax M2.7**.
 ## Model lineup
 
-- `MiniMax-M2.7`: default hosted multimodal model (text + image input).
-- `MiniMax-M2.7-highspeed`: faster M2.7 multimodal tier (text + image input).
+- `MiniMax-M2.7`: default hosted reasoning model.
+- `MiniMax-M2.7-highspeed`: faster M2.7 reasoning tier.
 - `image-01`: image generation model (generate and image-to-image editing).
 
 ## Image generation
@@ -38,7 +38,12 @@ To use MiniMax for image generation, set it as the image generation provider:
 The plugin uses the same `MINIMAX_API_KEY` or OAuth auth as the text models. No additional configuration is needed if MiniMax is already set up.
 
-For chat/inference models, both `MiniMax-M2.7` and `MiniMax-M2.7-highspeed` accept image input in addition to text.
+When onboarding or API-key setup writes explicit `models.providers.minimax`
+entries, OpenClaw materializes `MiniMax-M2.7` and
+`MiniMax-M2.7-highspeed` with `input: ["text", "image"]`.
+
+The bundled MiniMax provider catalog itself currently advertises those chat
+refs as text-only metadata until explicit provider config is materialized.
 
 ## Choose a setup
@@ -97,7 +102,7 @@ Configure via CLI:
         name: "MiniMax M2.7 Highspeed",
         reasoning: true,
         input: ["text", "image"],
-        cost: { input: 0.3, output: 1.2, cacheRead: 0.06, cacheWrite: 0.375 },
+        cost: { input: 0.6, output: 2.4, cacheRead: 0.06, cacheWrite: 0.375 },
         contextWindow: 204800,
         maxTokens: 131072,
       },
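A quick way to sanity-check the updated `cost` values in this hunk is to estimate a per-request cost from token counts. This sketch assumes the cost fields are USD per million tokens, which matches common catalog conventions but is not stated in the snippet itself:

```typescript
// Hypothetical cost estimator; assumes cost fields are USD per million
// tokens (an assumption, not stated in the models.json snippet above).
interface Cost {
  input: number;
  output: number;
  cacheRead: number;
  cacheWrite: number;
}

function estimateCostUSD(
  cost: Cost,
  usage: { input: number; output: number; cacheRead?: number; cacheWrite?: number },
): number {
  const perTok = (rate: number, tokens = 0) => (rate * tokens) / 1_000_000;
  return (
    perTok(cost.input, usage.input) +
    perTok(cost.output, usage.output) +
    perTok(cost.cacheRead, usage.cacheRead) +
    perTok(cost.cacheWrite, usage.cacheWrite)
  );
}

// Updated highspeed rates from the diff above:
const highspeed: Cost = { input: 0.6, output: 2.4, cacheRead: 0.06, cacheWrite: 0.375 };
estimateCostUSD(highspeed, { input: 10_000, output: 2_000 }); // ≈ $0.0108
```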
@@ -152,8 +157,12 @@ Use the interactive config wizard to set MiniMax without editing JSON:
 ## Notes
 
 - Model refs are `minimax/<model>`.
-- Default chat model: `MiniMax-M2.7` (text + image input).
-- Alternate chat model: `MiniMax-M2.7-highspeed` (text + image input).
+- Default chat model: `MiniMax-M2.7`
+- Alternate chat model: `MiniMax-M2.7-highspeed`
+- Onboarding and direct API-key setup write explicit model definitions with
+  `input: ["text", "image"]` for both M2.7 variants
+- The bundled provider catalog currently exposes the chat refs as text-only
+  metadata until explicit MiniMax provider config exists
 - Coding Plan usage API: `https://api.minimaxi.com/v1/api/openplatform/coding_plan/remains` (requires a coding plan key).
 - Update pricing values in `models.json` if you need exact cost tracking.
 - Referral link for MiniMax Coding Plan (10% off): [https://platform.minimax.io/subscribe/coding-plan?code=DbXJTRClnb&source=link](https://platform.minimax.io/subscribe/coding-plan?code=DbXJTRClnb&source=link)
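The Coding Plan usage endpoint in the notes above could be queried along these lines. Only the URL comes from the docs; the `Authorization: Bearer` header shape and the function name are assumptions, and the response schema is not documented here:

```typescript
// Sketch: query remaining Coding Plan quota. The auth header shape is
// an assumption; only the URL comes from the notes above.
const USAGE_URL =
  "https://api.minimaxi.com/v1/api/openplatform/coding_plan/remains";

async function fetchCodingPlanRemains(apiKey: string): Promise<unknown> {
  const res = await fetch(USAGE_URL, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`usage check failed: HTTP ${res.status}`);
  return res.json(); // response schema is undocumented here
}
```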