From b5608397d0c79566d4f643b754a0c3cbf7f03649 Mon Sep 17 00:00:00 2001
From: Peter Steinberger
Date: Sat, 4 Apr 2026 09:45:18 +0100
Subject: [PATCH] docs: refresh minimax and kilocode refs

---
 docs/concepts/model-providers.md |  5 +++++
 docs/providers/kilocode.md       | 10 ++++++----
 docs/providers/minimax.md        | 21 +++++++++++++++------
 3 files changed, 26 insertions(+), 10 deletions(-)

diff --git a/docs/concepts/model-providers.md b/docs/concepts/model-providers.md
index 48b219f2562..855db10ddf8 100644
--- a/docs/concepts/model-providers.md
+++ b/docs/concepts/model-providers.md
@@ -293,6 +293,8 @@ OpenClaw ships with the pi‑ai catalog. These providers require **no**
 - Static fallback catalog ships `kilocode/kilo/auto`; live
   `https://api.kilo.ai/api/gateway/models` discovery can expand the runtime
   catalog further.
+- Exact upstream routing behind `kilocode/kilo/auto` is owned by Kilo Gateway,
+  not hard-coded in OpenClaw.
 
 See [/providers/kilocode](/providers/kilocode) for setup details.
 
@@ -309,6 +311,9 @@
   - Example model: `kilocode/kilo/auto`
 - MiniMax: `minimax` (`MINIMAX_API_KEY`)
   - Example model: `minimax/MiniMax-M2.7`
+- MiniMax onboarding/API-key setup writes explicit M2.7 model definitions with
+  `input: ["text", "image"]`; the bundled provider catalog keeps the chat refs
+  text-only until that provider config is materialized
 - Moonshot: `moonshot` (`MOONSHOT_API_KEY`)
   - Example model: `moonshot/kimi-k2.5`
 - Kimi Coding: `kimi` (`KIMI_API_KEY` or `KIMICODE_API_KEY`)
diff --git a/docs/providers/kilocode.md b/docs/providers/kilocode.md
index b77dfce8f28..6f7f2b440a6 100644
--- a/docs/providers/kilocode.md
+++ b/docs/providers/kilocode.md
@@ -44,11 +44,11 @@ export KILOCODE_API_KEY="" # pragma: allowlist secret
 
 ## Default model
 
-The default model is `kilocode/kilo/auto`, a smart routing model that automatically selects
-the best underlying model based on the task:
+The default model is `kilocode/kilo/auto`, a provider-owned smart-routing
+model managed by Kilo Gateway.
 
-- Planning, debugging, and orchestration tasks route to Claude Opus
-- Code writing and exploration tasks route to Claude Sonnet
+OpenClaw treats `kilocode/kilo/auto` as the stable default ref, but does not
+publish a source-backed task-to-upstream-model mapping for that route.
 
 ## Available models
 
@@ -75,6 +75,8 @@ kilocode/google/gemini-3-pro-preview
   and `maxTokens: 128000`
 - At startup, OpenClaw tries `GET https://api.kilo.ai/api/gateway/models` and
   merges discovered models ahead of the static fallback catalog
+- Exact upstream routing behind `kilocode/kilo/auto` is owned by Kilo Gateway,
+  not hard-coded in OpenClaw
 - Kilo Gateway is documented in source as OpenRouter-compatible, so it stays on the
   proxy-style OpenAI-compatible path rather than native OpenAI request shaping
 - For more model/provider options, see [/concepts/model-providers](/concepts/model-providers).
diff --git a/docs/providers/minimax.md b/docs/providers/minimax.md
index 60fedb33c8a..2e3f5c5695a 100644
--- a/docs/providers/minimax.md
+++ b/docs/providers/minimax.md
@@ -12,8 +12,8 @@ OpenClaw's MiniMax provider defaults to **MiniMax M2.7**.
 
 ## Model lineup
 
-- `MiniMax-M2.7`: default hosted multimodal model (text + image input).
-- `MiniMax-M2.7-highspeed`: faster M2.7 multimodal tier (text + image input).
+- `MiniMax-M2.7`: default hosted reasoning model.
+- `MiniMax-M2.7-highspeed`: faster M2.7 reasoning tier.
 - `image-01`: image generation model (generate and image-to-image editing).
 
 ## Image generation
@@ -38,7 +38,12 @@ To use MiniMax for image generation, set it as the image generation provider:
 The plugin uses the same `MINIMAX_API_KEY` or OAuth auth as the text models.
 No additional configuration is needed if MiniMax is already set up.
 
-For chat/inference models, both `MiniMax-M2.7` and `MiniMax-M2.7-highspeed` accept image input in addition to text.
+When onboarding or API-key setup writes explicit `models.providers.minimax`
+entries, OpenClaw materializes `MiniMax-M2.7` and
+`MiniMax-M2.7-highspeed` with `input: ["text", "image"]`.
+
+The bundled MiniMax provider catalog itself currently advertises those chat
+refs as text-only metadata until explicit provider config is materialized.
 
 ## Choose a setup
 
@@ -97,7 +102,7 @@ Configure via CLI:
           name: "MiniMax M2.7 Highspeed",
           reasoning: true,
           input: ["text", "image"],
-          cost: { input: 0.3, output: 1.2, cacheRead: 0.06, cacheWrite: 0.375 },
+          cost: { input: 0.6, output: 2.4, cacheRead: 0.06, cacheWrite: 0.375 },
           contextWindow: 204800,
           maxTokens: 131072,
         },
@@ -152,8 +157,12 @@ Use the interactive config wizard to set MiniMax without editing JSON:
 
 ## Notes
 - Model refs are `minimax/`.
-- Default chat model: `MiniMax-M2.7` (text + image input).
-- Alternate chat model: `MiniMax-M2.7-highspeed` (text + image input).
+- Default chat model: `MiniMax-M2.7`
+- Alternate chat model: `MiniMax-M2.7-highspeed`
+- Onboarding and direct API-key setup write explicit model definitions with
+  `input: ["text", "image"]` for both M2.7 variants
+- The bundled provider catalog currently exposes the chat refs as text-only
+  metadata until explicit MiniMax provider config exists
 - Coding Plan usage API: `https://api.minimaxi.com/v1/api/openplatform/coding_plan/remains` (requires a coding plan key).
 - Update pricing values in `models.json` if you need exact cost tracking.
 - Referral link for MiniMax Coding Plan (10% off): [https://platform.minimax.io/subscribe/coding-plan?code=DbXJTRClnb&source=link](https://platform.minimax.io/subscribe/coding-plan?code=DbXJTRClnb&source=link)
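
For readers estimating spend from the corrected pricing in this patch: a minimal sketch of per-request cost accounting. It assumes the `cost` fields are USD per 1M tokens (the patch does not state units), and `estimate_cost` is a hypothetical helper, not an OpenClaw function.

```python
# Sketch: estimate request cost from a catalog-style cost entry.
# Assumption: cost fields are USD per 1M tokens; the field names mirror
# the MiniMax-M2.7-highspeed entry patched above.

def estimate_cost(cost: dict, input_tokens: int, output_tokens: int,
                  cache_read_tokens: int = 0, cache_write_tokens: int = 0) -> float:
    """Return estimated USD for one request under a per-1M-token price table."""
    return (
        input_tokens * cost["input"]
        + output_tokens * cost["output"]
        + cache_read_tokens * cost["cacheRead"]
        + cache_write_tokens * cost["cacheWrite"]
    ) / 1_000_000

m27_highspeed = {"input": 0.6, "output": 2.4, "cacheRead": 0.06, "cacheWrite": 0.375}

# 200k prompt tokens plus 4k completion tokens:
print(round(estimate_cost(m27_highspeed, 200_000, 4_000), 4))  # → 0.1296
```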
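
The kilocode notes above describe startup discovery via `GET https://api.kilo.ai/api/gateway/models`, merged ahead of the static fallback catalog. A hedged sketch of that precedence rule; the function and data shapes are illustrative, not OpenClaw's actual internals.

```python
# Sketch: merge live-discovered models ahead of a static fallback catalog.
# Rule illustrated: discovered entries win on ref collisions, while
# fallback-only refs survive when discovery omits them.

STATIC_FALLBACK = {
    "kilocode/kilo/auto": {"source": "static"},
}

def merge_catalog(discovered: dict, fallback: dict) -> dict:
    """Discovered models take precedence; the fallback fills the gaps."""
    merged = dict(fallback)
    merged.update(discovered)  # live discovery overrides static entries
    return merged

discovered = {
    "kilocode/kilo/auto": {"source": "gateway"},
    "kilocode/google/gemini-3-pro-preview": {"source": "gateway"},
}

catalog = merge_catalog(discovered, STATIC_FALLBACK)
print(catalog["kilocode/kilo/auto"]["source"])  # → gateway
```

If the discovery request fails, merging an empty dict leaves the static fallback intact, which matches the "static fallback catalog" framing in the docs.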
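
The MiniMax changes hinge on precedence between the bundled catalog (text-only chat refs) and an explicitly materialized `models.providers.minimax` entry (`input: ["text", "image"]`). A sketch of that resolution order; the structures and `resolve_model` helper are hypothetical, only the precedence claim comes from the docs.

```python
# Sketch: an explicit provider config entry overrides bundled catalog metadata.
# Only the override rule is taken from the docs; shapes are made up.

BUNDLED_CATALOG = {
    "minimax/MiniMax-M2.7": {"input": ["text"]},
    "minimax/MiniMax-M2.7-highspeed": {"input": ["text"]},
}

def resolve_model(ref: str, explicit_config: dict) -> dict:
    """Prefer a materialized provider entry over bundled metadata."""
    return explicit_config.get(ref, BUNDLED_CATALOG[ref])

# Before onboarding writes config, bundled text-only metadata applies:
print(resolve_model("minimax/MiniMax-M2.7", {})["input"])  # → ['text']

# After onboarding materializes explicit entries, image input appears:
materialized = {"minimax/MiniMax-M2.7": {"input": ["text", "image"]}}
print(resolve_model("minimax/MiniMax-M2.7", materialized)["input"])  # → ['text', 'image']
```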