From 3aac43e30b7ffb2ad5b0887aaeeebc40ed6ec492 Mon Sep 17 00:00:00 2001 From: Vincent Koc Date: Fri, 27 Mar 2026 20:46:05 -0700 Subject: [PATCH] docs: remove stale MiniMax M2.5 refs and add image generation docs After the M2.7-only catalog trim (#54487), update 10 docs files: - Replace removed M2.5/VL-01 model references across FAQ, wizard, config reference, local-models, and provider pages - Make local-models guide model-agnostic (generic LM Studio placeholder) - Add image-01 generation section to minimax.md - Leave third-party catalogs (Synthetic, Venice) unchanged --- docs/concepts/model-providers.md | 10 +++---- docs/gateway/configuration-examples.md | 6 ++--- docs/gateway/configuration-reference.md | 4 +-- docs/gateway/local-models.md | 35 +++++++++++++------------ docs/help/faq.md | 2 +- docs/providers/minimax.md | 23 ++++++++++++++++ docs/providers/qwen.md | 2 +- docs/providers/qwen_modelstudio.md | 2 +- docs/reference/wizard.md | 2 +- docs/start/wizard-cli-reference.md | 2 +- 10 files changed, 56 insertions(+), 32 deletions(-) diff --git a/docs/concepts/model-providers.md b/docs/concepts/model-providers.md index a1987aa8977..674ed6281e4 100644 --- a/docs/concepts/model-providers.md +++ b/docs/concepts/model-providers.md @@ -247,7 +247,7 @@ OpenClaw ships with the pi‑ai catalog. These providers require **no** - Example model: `kilocode/anthropic/claude-opus-4.6` - CLI: `openclaw onboard --kilocode-api-key ` - Base URL: `https://api.kilo.ai/api/gateway/` -- Expanded built-in catalog includes GLM-5 Free, MiniMax M2.5 Free, GPT-5.2, Gemini 3 Pro Preview, Gemini 3 Flash Preview, Grok Code Fast 1, and Kimi K2.5. +- Expanded built-in catalog includes GLM-5 Free, MiniMax M2.7 Free, GPT-5.2, Gemini 3 Pro Preview, Gemini 3 Flash Preview, Grok Code Fast 1, and Kimi K2.5. See [/providers/kilocode](/providers/kilocode) for setup details. 
@@ -538,8 +538,8 @@ Example (OpenAI‑compatible):
 {
   agents: {
     defaults: {
-      model: { primary: "lmstudio/minimax-m2.5-gs32" },
-      models: { "lmstudio/minimax-m2.5-gs32": { alias: "Minimax" } },
+      model: { primary: "lmstudio/my-local-model" },
+      models: { "lmstudio/my-local-model": { alias: "Local" } },
     },
   },
   models: {
@@ -550,8 +550,8 @@ Example (OpenAI‑compatible):
         api: "openai-completions",
         models: [
           {
-            id: "minimax-m2.5-gs32",
-            name: "MiniMax M2.5",
+            id: "my-local-model",
+            name: "Local Model",
             reasoning: false,
             input: ["text"],
             cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
diff --git a/docs/gateway/configuration-examples.md b/docs/gateway/configuration-examples.md
index f279cb23e57..8f264571067 100644
--- a/docs/gateway/configuration-examples.md
+++ b/docs/gateway/configuration-examples.md
@@ -617,7 +617,7 @@ terms before depending on subscription auth.
 {
   agent: {
     workspace: "~/.openclaw/workspace",
-    model: { primary: "lmstudio/minimax-m2.5-gs32" },
+    model: { primary: "lmstudio/my-local-model" },
   },
   models: {
     mode: "merge",
@@ -628,8 +628,8 @@ terms before depending on subscription auth.
         api: "openai-responses",
         models: [
           {
-            id: "minimax-m2.5-gs32",
-            name: "MiniMax M2.5 GS32",
+            id: "my-local-model",
+            name: "Local Model",
             reasoning: false,
             input: ["text"],
             cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
diff --git a/docs/gateway/configuration-reference.md b/docs/gateway/configuration-reference.md
index 8ac0d81b446..96a547888e9 100644
--- a/docs/gateway/configuration-reference.md
+++ b/docs/gateway/configuration-reference.md
@@ -2356,13 +2356,13 @@ Base URL should omit `/v1` (Anthropic client appends it). Shortcut: `openclaw on
 ```
 
 Set `MINIMAX_API_KEY`. Shortcut: `openclaw onboard --auth-choice minimax-api`.
-`MiniMax-M2.5` and `MiniMax-M2.5-highspeed` remain available if you prefer the older text models.
+The hosted catalog now includes M2.7 models only; the M2.5 text models have been removed.
-See [Local Models](/gateway/local-models). 
TL;DR: run MiniMax M2.5 via LM Studio Responses API on serious hardware; keep hosted models merged for fallback. +See [Local Models](/gateway/local-models). TL;DR: run a large local model via LM Studio Responses API on serious hardware; keep hosted models merged for fallback. diff --git a/docs/gateway/local-models.md b/docs/gateway/local-models.md index 1bb9dac5b91..7b0c59e7e92 100644 --- a/docs/gateway/local-models.md +++ b/docs/gateway/local-models.md @@ -13,34 +13,34 @@ Local is doable, but OpenClaw expects large context + strong defenses against pr If you want the lowest-friction local setup, start with [Ollama](/providers/ollama) and `openclaw onboard`. This page is the opinionated guide for higher-end local stacks and custom OpenAI-compatible local servers. -## Recommended: LM Studio + MiniMax M2.5 (Responses API, full-size) +## Recommended: LM Studio + large local model (Responses API) -Best current local stack. Load MiniMax M2.5 in LM Studio, enable the local server (default `http://127.0.0.1:1234`), and use Responses API to keep reasoning separate from final text. +Best current local stack. Load a large model in LM Studio (for example, a full-size Qwen, DeepSeek, or Llama build), enable the local server (default `http://127.0.0.1:1234`), and use Responses API to keep reasoning separate from final text. 
 ```json5
 {
   agents: {
     defaults: {
-      model: { primary: "lmstudio/minimax-m2.5-gs32" },
+      model: { primary: "lmstudio/my-local-model" },
       models: {
-        "anthropic/claude-opus-4-6": { alias: "Opus" },
-        "lmstudio/minimax-m2.5-gs32": { alias: "Minimax" },
+        "anthropic/claude-opus-4-6": { alias: "Opus" },
+        "lmstudio/my-local-model": { alias: "Local" },
       },
     },
   },
   models: {
-    mode: "merge",
+    mode: "merge",
     providers: {
       lmstudio: {
-        baseUrl: "http://127.0.0.1:1234/v1",
-        apiKey: "lmstudio",
-        api: "openai-responses",
+        baseUrl: "http://127.0.0.1:1234/v1",
+        apiKey: "lmstudio",
+        api: "openai-responses",
         models: [
           {
-            id: "minimax-m2.5-gs32",
-            name: "MiniMax M2.5 GS32",
+            id: "my-local-model",
+            name: "Local Model",
             reasoning: false,
-            input: ["text"],
+            input: ["text"],
             cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
             contextWindow: 196608,
             maxTokens: 8192,
@@ -55,7 +55,8 @@ Best current local stack. Load MiniMax M2.5 in LM Studio, enable the local serve
 **Setup checklist**
 
 - Install LM Studio: [https://lmstudio.ai](https://lmstudio.ai)
-- In LM Studio, download the **largest MiniMax M2.5 build available** (avoid “small”/heavily quantized variants), start the server, confirm `http://127.0.0.1:1234/v1/models` lists it.
+- In LM Studio, download the **largest model build available** (avoid “small”/heavily quantized variants), start the server, confirm `http://127.0.0.1:1234/v1/models` lists it.
+- Replace `my-local-model` with the actual model ID shown in LM Studio.
 - Keep the model loaded; cold-load adds startup latency.
 - Adjust `contextWindow`/`maxTokens` if your LM Studio build differs.
 - For WhatsApp, stick to Responses API so only final text is sent. 
@@ -70,11 +71,11 @@ Keep hosted models configured even when running local; use `models.mode: "merge" defaults: { model: { primary: "anthropic/claude-sonnet-4-6", - fallbacks: ["lmstudio/minimax-m2.5-gs32", "anthropic/claude-opus-4-6"], + fallbacks: ["lmstudio/my-local-model", "anthropic/claude-opus-4-6"], }, models: { "anthropic/claude-sonnet-4-6": { alias: "Sonnet" }, - "lmstudio/minimax-m2.5-gs32": { alias: "MiniMax Local" }, + "lmstudio/my-local-model": { alias: "Local" }, "anthropic/claude-opus-4-6": { alias: "Opus" }, }, }, @@ -88,8 +89,8 @@ Keep hosted models configured even when running local; use `models.mode: "merge" api: "openai-responses", models: [ { - id: "minimax-m2.5-gs32", - name: "MiniMax M2.5 GS32", + id: "my-local-model", + name: "Local Model", reasoning: false, input: ["text"], cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 }, diff --git a/docs/help/faq.md b/docs/help/faq.md index 90b02e5df84..1d7c164efaa 100644 --- a/docs/help/faq.md +++ b/docs/help/faq.md @@ -633,7 +633,7 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS, - Usually no. OpenClaw needs large context + strong safety; small cards truncate and leak. If you must, run the **largest** MiniMax M2.5 build you can locally (LM Studio) and see [/gateway/local-models](/gateway/local-models). Smaller/quantized models increase prompt-injection risk - see [Security](/gateway/security). + Usually no. OpenClaw needs large context + strong safety; small cards truncate and leak. If you must, run the **largest** model build you can locally (LM Studio) and see [/gateway/local-models](/gateway/local-models). Smaller/quantized models increase prompt-injection risk - see [Security](/gateway/security). diff --git a/docs/providers/minimax.md b/docs/providers/minimax.md index ad1e99c2b2c..eb18ec032dc 100644 --- a/docs/providers/minimax.md +++ b/docs/providers/minimax.md @@ -14,6 +14,29 @@ OpenClaw's MiniMax provider defaults to **MiniMax M2.7**. 
 - `MiniMax-M2.7`: default hosted text model.
 - `MiniMax-M2.7-highspeed`: faster M2.7 text tier.
+- `image-01`: image generation model (text-to-image generation and image-to-image editing).
+
+## Image generation
+
+The MiniMax plugin registers the `image-01` model for the `image_generate` tool. It supports:
+
+- **Text-to-image generation** with aspect ratio control.
+- **Image-to-image editing** (subject reference) with aspect ratio control.
+- Supported aspect ratios: `1:1`, `16:9`, `4:3`, `3:2`, `2:3`, `3:4`, `9:16`, `21:9`.
+
+To use MiniMax for image generation, set it as the image generation provider:
+
+```json5
+{
+  agents: {
+    defaults: {
+      imageGenerationModel: { primary: "minimax/image-01" },
+    },
+  },
+}
+```
+
+The plugin uses the same `MINIMAX_API_KEY` or OAuth credentials as the text models. No additional configuration is needed if MiniMax is already set up.
 
 ## Choose a setup
diff --git a/docs/providers/qwen.md b/docs/providers/qwen.md
index 9a758e2db96..3a969a54e44 100644
--- a/docs/providers/qwen.md
+++ b/docs/providers/qwen.md
@@ -20,7 +20,7 @@ background.
 ## Recommended: Model Studio (Alibaba Cloud Coding Plan)
 
 Use [Model Studio](/providers/modelstudio) for officially supported access to
-Qwen models (Qwen 3.5 Plus, GLM-4.7, Kimi K2.5, MiniMax M2.5, and more).
+Qwen models (Qwen 3.5 Plus, GLM-4.7, Kimi K2.5, and more).
 
 ```bash
 # Global endpoint
diff --git a/docs/providers/qwen_modelstudio.md b/docs/providers/qwen_modelstudio.md
index df400536284..c394299a541 100644
--- a/docs/providers/qwen_modelstudio.md
+++ b/docs/providers/qwen_modelstudio.md
@@ -74,7 +74,7 @@ override with a custom `baseUrl` in config.
 - **qwen3-coder-plus**, **qwen3-coder-next** — Qwen coding models
 - **GLM-5** — GLM models via Alibaba
 - **Kimi K2.5** — Moonshot AI via Alibaba
-- **MiniMax-M2.5** — MiniMax via Alibaba
+- **MiniMax-M2.7** — MiniMax via Alibaba
 
 Some models (qwen3.5-plus, kimi-k2.5) support image input. Context windows
 range from 200K to 1M tokens. 
diff --git a/docs/reference/wizard.md b/docs/reference/wizard.md index 12a746471df..56958324cdf 100644 --- a/docs/reference/wizard.md +++ b/docs/reference/wizard.md @@ -46,7 +46,7 @@ For a high-level overview, see [Onboarding (CLI)](/start/wizard). - More detail: [Vercel AI Gateway](/providers/vercel-ai-gateway) - **Cloudflare AI Gateway**: prompts for Account ID, Gateway ID, and `CLOUDFLARE_AI_GATEWAY_API_KEY`. - More detail: [Cloudflare AI Gateway](/providers/cloudflare-ai-gateway) - - **MiniMax**: config is auto-written; hosted default is `MiniMax-M2.7` and `MiniMax-M2.5` stays available. + - **MiniMax**: config is auto-written; hosted default is `MiniMax-M2.7`. - More detail: [MiniMax](/providers/minimax) - **Synthetic (Anthropic-compatible)**: prompts for `SYNTHETIC_API_KEY`. - More detail: [Synthetic](/providers/synthetic) diff --git a/docs/start/wizard-cli-reference.md b/docs/start/wizard-cli-reference.md index 3caf3b221f2..0c52146019a 100644 --- a/docs/start/wizard-cli-reference.md +++ b/docs/start/wizard-cli-reference.md @@ -174,7 +174,7 @@ What you set: More detail: [Cloudflare AI Gateway](/providers/cloudflare-ai-gateway). - Config is auto-written. Hosted default is `MiniMax-M2.7`; `MiniMax-M2.5` stays available. + Config is auto-written. Hosted default is `MiniMax-M2.7`. More detail: [MiniMax](/providers/minimax).