mirror of https://github.com/openclaw/openclaw.git
docs: add Bedrock inference profiles and Bedrock Mantle provider coverage, re-sort changelog
This commit is contained in:
parent 35b132c7eb
commit fc9648b620
@ -23,6 +23,8 @@ Docs: https://docs.openclaw.ai
- Plugins/install: add `openclaw plugins install --force` to overwrite existing plugin and hook-pack install targets without using the dangerous-code override flag. (#60544) Thanks @gumadeiras.
- Plugins/onboarding: add plugin config TUI prompts to onboard and configure wizards so more plugin setup can stay in the guided flow. (#60590) Thanks @odysseus0.
- Prompt caching: keep prompt prefixes more reusable across transport fallback, deterministic MCP tool ordering, compaction, and embedded image history so follow-up turns hit cache more reliably. (#58036, #58037, #58038, #59054, #60603, #60691) Thanks @bcherny.
- Providers/Amazon Bedrock: discover regional and global inference profiles, inherit their backing model capabilities, and inject the Bedrock request region automatically so cross-region Claude profiles work without manual provider overrides. (#61299) Thanks @wirjo.
- Providers/Amazon Bedrock Mantle: add a bundled OpenAI-compatible Mantle provider with bearer-token discovery, automatic OSS model catalog loading, and Bedrock Mantle region detection for hosted GPT-OSS, Qwen, Kimi, GLM, and similar routes. (#61296) Thanks @wirjo.
- Providers/Anthropic: remove setup-token from new onboarding and auth-command setup paths, keep existing configured legacy token profiles runnable, and steer new Anthropic setup to Claude CLI or API keys.
- Providers/Fireworks: add a bundled Fireworks AI provider plugin with `FIREWORKS_API_KEY` onboarding, Fire Pass Kimi defaults, and dynamic Fireworks model-id support.
- Providers/Ollama: add a bundled Ollama Web Search provider for key-free `web_search` via your configured Ollama host and `ollama signin`. (#59318) Thanks @BruceMacD.
|
@ -30,8 +32,6 @@ Docs: https://docs.openclaw.ai
- Providers/request overrides: add shared model and media request transport overrides across OpenAI-, Anthropic-, Google-, and compatible provider paths, including headers, auth, proxy, and TLS controls. (#60200)
- Providers/StepFun: add the bundled StepFun provider plugin with standard and Step Plan endpoints, China/global onboarding choices, `step-3.5-flash` on both catalogs, and `step-3.5-flash-2603` currently exposed on Step Plan. (#60032) Thanks @hengm3467.
- Tools/web_search: add a bundled MiniMax Search provider backed by the Coding Plan search API, with region reuse from `MINIMAX_API_HOST` and plugin-owned credential config. (#54648) Thanks @fengmk2.
- Providers/Amazon Bedrock: discover regional and global inference profiles, inherit their backing model capabilities, and inject the Bedrock request region automatically so cross-region Claude profiles work without manual provider overrides. (#61299) Thanks @wirjo.
- Providers/Amazon Bedrock Mantle: add a bundled OpenAI-compatible Mantle provider with bearer-token discovery, automatic OSS model catalog loading, and Bedrock Mantle region detection for hosted GPT-OSS, Qwen, Kimi, GLM, and similar routes. (#61296) Thanks @wirjo.
### Fixes
@ -1235,6 +1235,7 @@
"pages": [
  "providers/anthropic",
  "providers/bedrock",
  "providers/bedrock-mantle",
  "providers/chutes",
  "providers/claude-max-api-proxy",
  "providers/cloudflare-ai-gateway",
|
@ -0,0 +1,91 @@
---
summary: "Use Amazon Bedrock Mantle (OpenAI-compatible) models with OpenClaw"
read_when:
  - You want to use Bedrock Mantle hosted OSS models with OpenClaw
  - You need the Mantle OpenAI-compatible endpoint for GPT-OSS, Qwen, Kimi, or GLM
title: "Amazon Bedrock Mantle"
---

# Amazon Bedrock Mantle

OpenClaw includes a bundled **Amazon Bedrock Mantle** provider that connects to the Mantle OpenAI-compatible endpoint. Mantle hosts open-source and third-party models (GPT-OSS, Qwen, Kimi, GLM, and similar) through a standard `/v1/chat/completions` surface backed by Bedrock infrastructure.

## What OpenClaw supports

- Provider: `amazon-bedrock-mantle`
- API: `openai-completions` (OpenAI-compatible)
- Auth: bearer token via `AWS_BEARER_TOKEN_BEDROCK`
- Region: `AWS_REGION` or `AWS_DEFAULT_REGION` (default: `us-east-1`)

## Automatic model discovery

When `AWS_BEARER_TOKEN_BEDROCK` is set, OpenClaw automatically discovers available Mantle models by querying the region's `/v1/models` endpoint. Discovery results are cached for 1 hour.

Supported regions: `us-east-1`, `us-east-2`, `us-west-2`, `ap-northeast-1`, `ap-south-1`, `ap-southeast-3`, `eu-central-1`, `eu-west-1`, `eu-west-2`, `eu-south-1`, `eu-north-1`, `sa-east-1`.
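The discover-then-cache behavior described above can be sketched roughly as follows. This is a minimal illustration, not the plugin's actual internals: the function names, the response shape, and the cache layout are assumptions.

```typescript
// Rough sketch of cached /v1/models discovery; names and shapes are assumed.
const CACHE_TTL_MS = 60 * 60 * 1000; // discovery results are cached for 1 hour

interface CacheEntry {
  ids: string[];
  fetchedAt: number;
}
let cache: CacheEntry | undefined;

// Pure helper so the TTL logic can be reasoned about in isolation.
function isFresh(fetchedAt: number, now: number, ttlMs: number = CACHE_TTL_MS): boolean {
  return now - fetchedAt < ttlMs;
}

async function discoverMantleModels(baseUrl: string, token: string): Promise<string[]> {
  const now = Date.now();
  if (cache && isFresh(cache.fetchedAt, now)) return cache.ids;
  const res = await fetch(`${baseUrl}/models`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) return []; // endpoint unavailable: the provider is skipped
  const body = (await res.json()) as { data?: { id: string }[] };
  const ids = (body.data ?? []).map((m) => m.id);
  cache = { ids, fetchedAt: now };
  return ids;
}
```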
## Onboarding

1. Set the bearer token on the **gateway host**:

   ```bash
   export AWS_BEARER_TOKEN_BEDROCK="..."
   # Optional (defaults to us-east-1):
   export AWS_REGION="us-west-2"
   ```

2. Verify models are discovered:

   ```bash
   openclaw models list
   ```

Discovered models appear under the `amazon-bedrock-mantle` provider. No additional config is required unless you want to override defaults.
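As an optional sanity check outside OpenClaw, you can query the Mantle models endpoint directly. The base URL below mirrors the manual-configuration example later on this page; adjust the region to match yours.

```bash
# Lists the model ids visible to your bearer token (base URL is illustrative).
curl -s "https://bedrock-mantle.us-east-1.api.aws/v1/models" \
  -H "Authorization: Bearer $AWS_BEARER_TOKEN_BEDROCK"
```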
## Manual configuration

If you prefer explicit config instead of auto-discovery:

```json5
{
  models: {
    providers: {
      "amazon-bedrock-mantle": {
        baseUrl: "https://bedrock-mantle.us-east-1.api.aws/v1",
        api: "openai-completions",
        auth: "api-key",
        apiKey: "env:AWS_BEARER_TOKEN_BEDROCK",
        models: [
          {
            id: "gpt-oss-120b",
            name: "GPT-OSS 120B",
            reasoning: true,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 32000,
            maxTokens: 4096,
          },
        ],
      },
    },
  },
}
```
## Notes

- Mantle requires a bearer token today. Plain IAM credentials (instance roles, SSO, access keys) are not sufficient without a token.
- The bearer token is the same `AWS_BEARER_TOKEN_BEDROCK` used by the standard [Amazon Bedrock](/providers/bedrock) provider.
- Reasoning support is inferred from model IDs containing patterns like `thinking`, `reasoner`, or `gpt-oss-120b`.
- If the Mantle endpoint is unavailable or returns no models, the provider is silently skipped.
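The reasoning heuristic mentioned in the notes amounts to a simple substring check. The sketch below is illustrative only: the function name is hypothetical, and the pattern list is taken verbatim from the note rather than from the plugin source.

```typescript
// Hypothetical sketch: infer reasoning support from a Mantle model id using
// the substrings mentioned in the notes above.
const REASONING_PATTERNS = ["thinking", "reasoner", "gpt-oss-120b"];

function inferReasoning(modelId: string): boolean {
  const id = modelId.toLowerCase();
  return REASONING_PATTERNS.some((pattern) => id.includes(pattern));
}
```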
|
@ -22,8 +22,8 @@ not an API key.
## Automatic model discovery

OpenClaw can automatically discover Bedrock models that support **streaming**
and **text output**. Discovery uses `bedrock:ListFoundationModels` and is
cached (default: 1 hour).
and **text output**. Discovery uses `bedrock:ListFoundationModels` and
`bedrock:ListInferenceProfiles`, and results are cached (default: 1 hour).

How the implicit provider is enabled:
|
@ -153,6 +153,7 @@ export AWS_REGION=us-east-1
- `bedrock:InvokeModel`
- `bedrock:InvokeModelWithResponseStream`
- `bedrock:ListFoundationModels` (for automatic discovery)
- `bedrock:ListInferenceProfiles` (for inference profile discovery)

Or attach the managed policy `AmazonBedrockFullAccess`.
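If you want to confirm the extra permission before relying on discovery, the AWS CLI exposes the same API. This assumes AWS CLI v2 is installed and your credentials are already configured on the gateway host:

```bash
# Lists inference-profile ids when the IAM principal has the permission;
# fails with AccessDeniedException otherwise.
aws bedrock list-inference-profiles --region us-east-1 \
  --query 'inferenceProfileSummaries[].inferenceProfileId'
```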
@ -196,10 +197,29 @@ source ~/.bashrc
openclaw models list
```

## Inference profiles

OpenClaw discovers **regional and global inference profiles** alongside foundation models. When a profile maps to a known foundation model, the profile inherits that model's capabilities (context window, max tokens, reasoning, vision) and the correct Bedrock request region is injected automatically. This means cross-region Claude profiles work without manual provider overrides.

Inference profile IDs look like `us.anthropic.claude-opus-4-6-v1:0` (regional) or `anthropic.claude-opus-4-6-v1:0` (global). If the backing model is already in the discovery results, the profile inherits its full capability set; otherwise safe defaults apply.

No extra configuration is needed. As long as discovery is enabled and the IAM principal has `bedrock:ListInferenceProfiles`, profiles appear alongside foundation models in `openclaw models list`.
## Notes

- Bedrock requires **model access** enabled in your AWS account/region.
- Automatic discovery needs the `bedrock:ListFoundationModels` permission.
- Automatic discovery needs the `bedrock:ListFoundationModels` and `bedrock:ListInferenceProfiles` permissions.
- If you rely on auto mode, set one of the supported AWS auth env markers on the gateway host. If you prefer IMDS/shared-config auth without env markers, set `plugins.entries.amazon-bedrock.config.discovery.enabled: true`.
@ -11,7 +11,7 @@ export function registerBedrockMantlePlugin(api: OpenClawPluginApi): void {
api.registerProvider({
  id: providerId,
  label: "Amazon Bedrock Mantle (OpenAI-compatible)",
  docsPath: "/providers/models",
  docsPath: "/providers/bedrock-mantle",
  auth: [],
  catalog: {
    order: "simple",