mirror of https://github.com/openclaw/openclaw.git
Latest upstream changes:

- fix(ollama): inject num_ctx for OpenAI-compatible transport
- fix(ollama): discover per-model context and preserve higher limits
- fix(agents): prefer matching provider model for fallback limits
- fix(types): require numeric token limits in provider model merge
- fix(types): accept unknown payload in ollama num_ctx wrapper
- fix(types): simplify ollama settled-result extraction
- config(models): add provider flag for Ollama OpenAI num_ctx injection
- config(schema): allow provider num_ctx injection flag
- config(labels): label provider num_ctx injection flag
- config(help): document provider num_ctx injection flag
- agents(ollama): gate OpenAI num_ctx injection with provider config (see the sketch after this list)
- tests(ollama): cover provider num_ctx injection flag behavior
- docs(config): list provider num_ctx injection option
- docs(ollama): document OpenAI num_ctx injection toggle
- docs(config): clarify merge token-limit precedence
- config(help): note merge uses higher model token limits
- fix(ollama): cap /api/show discovery concurrency
- fix(ollama): restrict num_ctx injection to OpenAI compat
- tests(ollama): cover ipv6 and compat num_ctx gating
- fix(ollama): detect remote compat endpoints for ollama-labeled providers
- fix(ollama): cap per-model /api/show lookups to bound discovery load
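The gated num_ctx injection and the capped /api/show discovery described in these changes could look roughly like the following. This is a minimal TypeScript sketch under stated assumptions, not openclaw's actual implementation: `ProviderConfig`, `injectNumCtx`, `buildChatBody`, and `discoverContexts` are hypothetical names, and only the `/api/show` endpoint and its `model_info` context-length keys come from Ollama's public API.

```ts
// Illustrative sketch only: these names are not from openclaw's source; they
// show how a provider-level num_ctx flag and a bounded /api/show discovery
// pass could fit together.

interface ProviderConfig {
  baseUrl: string;
  // Hypothetical opt-in flag for injecting num_ctx into OpenAI-compatible
  // request bodies (the real openclaw option name may differ).
  injectNumCtx?: boolean;
}

interface ChatBody {
  model: string;
  messages: { role: string; content: string }[];
  // Extra field Ollama's OpenAI-compatible endpoint can pick up; strict
  // OpenAI-compatible servers may reject unknown fields.
  num_ctx?: number;
}

// Only inject num_ctx when the provider opts in and a per-model context
// length was actually discovered.
function buildChatBody(
  provider: ProviderConfig,
  model: string,
  messages: ChatBody["messages"],
  contextLength?: number,
): ChatBody {
  const body: ChatBody = { model, messages };
  if (provider.injectNumCtx && contextLength !== undefined) {
    body.num_ctx = contextLength;
  }
  return body;
}

// Query Ollama's /api/show for each model in small batches so discovery load
// stays bounded, collecting any "<family>.context_length" entry found in the
// returned model_info.
async function discoverContexts(
  baseUrl: string,
  models: string[],
  batchSize = 4,
): Promise<Map<string, number>> {
  const contexts = new Map<string, number>();
  for (let i = 0; i < models.length; i += batchSize) {
    const batch = models.slice(i, i + batchSize);
    // Promise.allSettled lets one failing lookup leave the others intact.
    await Promise.allSettled(
      batch.map(async (name) => {
        const res = await fetch(`${baseUrl}/api/show`, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ model: name }),
        });
        if (!res.ok) return;
        const info = (await res.json()) as {
          model_info?: Record<string, unknown>;
        };
        const entry = Object.entries(info.model_info ?? {}).find(([key]) =>
          key.endsWith(".context_length"),
        );
        const value = entry?.[1];
        if (typeof value === "number") contexts.set(name, value);
      }),
    );
  }
  return contexts;
}
```

Keeping the injection opt-in per provider is plausible because generic OpenAI-compatible backends may reject bodies containing unknown fields, which matches the commits above that restrict num_ctx injection to Ollama-style compat endpoints.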
Files in this directory:

- anthropic.md
- bedrock.md
- claude-max-api-proxy.md
- cloudflare-ai-gateway.md
- deepgram.md
- github-copilot.md
- glm.md
- huggingface.md
- index.md
- kilocode.md
- litellm.md
- minimax.md
- mistral.md
- models.md
- moonshot.md
- nvidia.md
- ollama.md
- openai.md
- opencode.md
- openrouter.md
- qianfan.md
- qwen.md
- synthetic.md
- together.md
- venice.md
- vercel-ai-gateway.md
- vllm.md
- xiaomi.md
- zai.md