feat: add fast mode toggle for OpenAI models

This commit is contained in:
Peter Steinberger 2026-03-12 23:30:58 +00:00
parent ddcaec89e9
commit d5bffcdeab
No known key found for this signature in database
66 changed files with 990 additions and 36 deletions

View File

@ -10,6 +10,7 @@ Docs: https://docs.openclaw.ai
- Docs/Kubernetes: Add a starter K8s install path with raw manifests, Kind setup, and deployment docs. Thanks @sallyom @dzianisv @egkristi
- Control UI/dashboard-v2: refresh the gateway dashboard with modular overview, chat, config, agent, and session views, plus a command palette, mobile bottom tabs, and richer chat tools like slash commands, search, export, and pinned messages. (#41503) Thanks @BunsDev.
- Models/plugins: move Ollama, vLLM, and SGLang onto the provider-plugin architecture, with provider-owned onboarding, discovery, model-picker setup, and post-selection hooks so core provider wiring is more modular.
- OpenAI/GPT-5.4 fast mode: add configurable session-level fast toggles across `/fast`, TUI, Control UI, and ACP, with per-model config defaults and OpenAI/Codex request shaping.
### Fixes

View File

@ -47,6 +47,7 @@ OpenClaw ships with the piai catalog. These providers require **no**
- Override per model via `agents.defaults.models["openai/<model>"].params.transport` (`"sse"`, `"websocket"`, or `"auto"`)
- OpenAI Responses WebSocket warm-up defaults to enabled via `params.openaiWsWarmup` (`true`/`false`)
- OpenAI priority processing can be enabled via `agents.defaults.models["openai/<model>"].params.serviceTier`
- OpenAI fast mode can be enabled per model via `agents.defaults.models["<provider>/<model>"].params.fastMode`
```json5
{
@ -78,6 +79,7 @@ OpenClaw ships with the piai catalog. These providers require **no**
- CLI: `openclaw onboard --auth-choice openai-codex` or `openclaw models auth login --provider openai-codex`
- Default transport is `auto` (WebSocket-first, SSE fallback)
- Override per model via `agents.defaults.models["openai-codex/<model>"].params.transport` (`"sse"`, `"websocket"`, or `"auto"`)
- Shares the same `/fast` toggle and `params.fastMode` config as direct `openai/*`
- Policy note: OpenAI Codex OAuth is explicitly supported for external tools/workflows like OpenClaw.
```json5

View File

@ -281,7 +281,7 @@ Runtime override (owner only):
- `openclaw status` — shows store path and recent sessions.
- `openclaw sessions --json` — dumps every entry (filter with `--active <minutes>`).
- `openclaw gateway call sessions.list --params '{}'` — fetch sessions from the running gateway (use `--url`/`--token` for remote gateway access).
- Send `/status` as a standalone message in chat to see whether the agent is reachable, how much of the session context is used, current thinking/verbose toggles, and when your WhatsApp web creds were last refreshed (helps spot relink needs).
- Send `/status` as a standalone message in chat to see whether the agent is reachable, how much of the session context is used, current thinking/fast/verbose toggles, and when your WhatsApp web creds were last refreshed (helps spot relink needs).
- Send `/context list` or `/context detail` to see what's in the system prompt and injected workspace files (and the biggest context contributors).
- Send `/stop` (or standalone abort phrases like `stop`, `stop action`, `stop run`, `stop openclaw`) to abort the current run, clear queued followups for that session, and stop any sub-agent runs spawned from it (the reply includes the stopped count).
- Send `/compact` (optional instructions) as a standalone message to summarize older context and free up window space. See [/concepts/compaction](/concepts/compaction).

View File

@ -165,6 +165,46 @@ pass that field through on direct `openai/*` Responses requests.
Supported values are `auto`, `default`, `flex`, and `priority`.
### OpenAI fast mode
OpenClaw exposes a shared fast-mode toggle for both `openai/*` and
`openai-codex/*` sessions:
- Chat/UI: `/fast status|on|off`
- Config: `agents.defaults.models["<provider>/<model>"].params.fastMode`
When fast mode is enabled, OpenClaw applies a low-latency OpenAI profile:
- `reasoning.effort = "low"` when the payload does not already specify reasoning
- `text.verbosity = "low"` when the payload does not already specify verbosity
- `service_tier = "priority"` for direct `openai/*` Responses calls to `api.openai.com`
Example:
```json5
{
agents: {
defaults: {
models: {
"openai/gpt-5.4": {
params: {
fastMode: true,
},
},
"openai-codex/gpt-5.4": {
params: {
fastMode: true,
},
},
},
},
},
}
```
Session overrides win over config. Clearing the session override in the Sessions UI
returns the session to the configured default.
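For reference, a direct `openai/*` Responses request with fast mode active would be shaped roughly like this (a sketch of the override behavior described above, not a verbatim request body):
```json5
{
  model: "gpt-5.4",
  // injected only when the caller did not already set them:
  reasoning: { effort: "low" },
  text: { verbosity: "low" },
  // direct api.openai.com calls only; omitted for openai-codex/*:
  service_tier: "priority",
}
```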
### OpenAI Responses server-side compaction
For direct OpenAI Responses models (`openai/*` using `api: "openai-responses"` with

View File

@ -14,7 +14,7 @@ The host-only bash chat command uses `! <cmd>` (with `/bash <cmd>` as an alias).
There are two related systems:
- **Commands**: standalone `/...` messages.
- **Directives**: `/think`, `/verbose`, `/reasoning`, `/elevated`, `/exec`, `/model`, `/queue`.
- **Directives**: `/think`, `/fast`, `/verbose`, `/reasoning`, `/elevated`, `/exec`, `/model`, `/queue`.
- Directives are stripped from the message before the model sees it.
- In normal chat messages (not directive-only), they are treated as “inline hints” and do **not** persist session settings.
- In directive-only messages (the message contains only directives), they persist to the session and reply with an acknowledgement.
@ -102,6 +102,7 @@ Text + native (when enabled):
- `/send on|off|inherit` (owner-only)
- `/reset` or `/new [model]` (optional model hint; remainder is passed through)
- `/think <off|minimal|low|medium|high|xhigh>` (dynamic choices by model/provider; aliases: `/thinking`, `/t`)
- `/fast status|on|off` (omitting the arg shows the current effective fast-mode state)
- `/verbose on|full|off` (alias: `/v`)
- `/reasoning on|off|stream` (alias: `/reason`; when on, sends a separate message prefixed `Reasoning:`; `stream` = Telegram draft only)
- `/elevated on|off|ask|full` (alias: `/elev`; `full` skips exec approvals)
@ -130,6 +131,7 @@ Notes:
- Discord thread-binding commands (`/focus`, `/unfocus`, `/agents`, `/session idle`, `/session max-age`) require effective thread bindings to be enabled (`session.threadBindings.enabled` and/or `channels.discord.threadBindings.enabled`).
- ACP command reference and runtime behavior: [ACP Agents](/tools/acp-agents).
- `/verbose` is meant for debugging and extra visibility; keep it **off** in normal use.
- `/fast on|off` persists a session override. Use the Sessions UI `inherit` option to clear it and fall back to config defaults.
- Tool failure summaries are still shown when relevant, but detailed failure text is only included when `/verbose` is `on` or `full`.
- `/reasoning` (and `/verbose`) are risky in group settings: they may reveal internal reasoning or tool output you did not intend to expose. Prefer leaving them off, especially in group chats.
- **Fast path:** command-only messages from allowlisted senders are handled immediately (bypass queue + model).

View File

@ -1,7 +1,7 @@
---
summary: "Directive syntax for /think + /verbose and how they affect model reasoning"
summary: "Directive syntax for /think, /fast, /verbose, and reasoning visibility"
read_when:
- Adjusting thinking or verbose directive parsing or defaults
- Adjusting thinking, fast-mode, or verbose directive parsing or defaults
title: "Thinking Levels"
---
@ -42,6 +42,19 @@ title: "Thinking Levels"
- **Embedded Pi**: the resolved level is passed to the in-process Pi agent runtime.
## Fast mode (/fast)
- Levels: `on|off`.
- A directive-only message toggles a session fast-mode override and replies `Fast mode enabled.` / `Fast mode disabled.`.
- Send `/fast` (or `/fast status`) with no mode to see the current effective fast-mode state.
- OpenClaw resolves fast mode in this order:
1. Inline/directive-only `/fast on|off`
2. Session override
3. Per-model config: `agents.defaults.models["<provider>/<model>"].params.fastMode`
4. Fallback: `off`
- For `openai/*`, fast mode applies the OpenAI fast profile: `service_tier=priority` when supported, plus low reasoning effort and low text verbosity.
- For `openai-codex/*`, fast mode applies the same low-latency profile on Codex Responses. OpenClaw keeps one shared `/fast` toggle across both auth paths.
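The precedence above can be sketched as follows (illustrative names only, not the actual OpenClaw internals):

```typescript
// Sketch of fast-mode resolution: directive > session > config > default.
type FastSource = "directive" | "session" | "config" | "default";

function resolveFast(
  directive: boolean | undefined,
  session: boolean | undefined,
  config: boolean | undefined,
): { enabled: boolean; source: FastSource } {
  // Each layer only applies when no higher-priority layer set a value.
  if (directive !== undefined) return { enabled: directive, source: "directive" };
  if (session !== undefined) return { enabled: session, source: "session" };
  if (config !== undefined) return { enabled: config, source: "config" };
  return { enabled: false, source: "default" };
}
```

So a `/fast on` directive wins even when `params.fastMode: false` is configured, and clearing the session override falls back to the config value.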
## Verbose directives (/verbose or /v)
- Levels: `on` (minimal) | `full` | `off` (default).

View File

@ -75,7 +75,7 @@ The Control UI can localize itself on first load based on your browser locale, a
- Stream tool calls + live tool output cards in Chat (agent events)
- Channels: WhatsApp/Telegram/Discord/Slack + plugin channels (Mattermost, etc.) status + QR login + per-channel config (`channels.status`, `web.login.*`, `config.patch`)
- Instances: presence list + refresh (`system-presence`)
- Sessions: list + per-session thinking/verbose overrides (`sessions.list`, `sessions.patch`)
- Sessions: list + per-session thinking/fast/verbose/reasoning overrides (`sessions.list`, `sessions.patch`)
- Cron jobs: list/add/edit/run/enable/disable + run history (`cron.*`)
- Skills: status, enable/disable, install, API key updates (`skills.*`)
- Nodes: list + caps (`node.list`)

View File

@ -37,7 +37,7 @@ Use `--password` if your Gateway uses password auth.
- Header: connection URL, current agent, current session.
- Chat log: user messages, assistant replies, system notices, tool cards.
- Status line: connection/run state (connecting, running, streaming, idle, error).
- Footer: connection state + agent + session + model + think/verbose/reasoning + token counts + deliver.
- Footer: connection state + agent + session + model + think/fast/verbose/reasoning + token counts + deliver.
- Input: text editor with autocomplete.
## Mental model: agents + sessions
@ -92,6 +92,7 @@ Core:
Session controls:
- `/think <off|minimal|low|medium|high>`
- `/fast <status|on|off>`
- `/verbose <on|full|off>`
- `/reasoning <on|off|stream>`
- `/usage <off|tokens|full>`

View File

@ -645,6 +645,77 @@ describe("acp setSessionConfigOption bridge behavior", () => {
sessionStore.clearAllSessionsForTest();
});
it("updates fast mode ACP config options through gateway session patches", async () => {
const sessionStore = createInMemorySessionStore();
const connection = createAcpConnection();
const sessionUpdate = connection.__sessionUpdateMock;
const request = vi.fn(async (method: string, params?: unknown) => {
if (method === "sessions.list") {
return {
ts: Date.now(),
path: "/tmp/sessions.json",
count: 1,
defaults: {
modelProvider: null,
model: null,
contextTokens: null,
},
sessions: [
{
key: "fast-session",
kind: "direct",
updatedAt: Date.now(),
thinkingLevel: "minimal",
modelProvider: "openai",
model: "gpt-5.4",
fastMode: true,
},
],
};
}
if (method === "sessions.patch") {
expect(params).toEqual({
key: "fast-session",
fastMode: true,
});
}
return { ok: true };
}) as GatewayClient["request"];
const agent = new AcpGatewayAgent(connection, createAcpGateway(request), {
sessionStore,
});
await agent.loadSession(createLoadSessionRequest("fast-session"));
sessionUpdate.mockClear();
const result = await agent.setSessionConfigOption(
createSetSessionConfigOptionRequest("fast-session", "fast_mode", "on"),
);
expect(result.configOptions).toEqual(
expect.arrayContaining([
expect.objectContaining({
id: "fast_mode",
currentValue: "on",
}),
]),
);
expect(sessionUpdate).toHaveBeenCalledWith({
sessionId: "fast-session",
update: {
sessionUpdate: "config_option_update",
configOptions: expect.arrayContaining([
expect.objectContaining({
id: "fast_mode",
currentValue: "on",
}),
]),
},
});
sessionStore.clearAllSessionsForTest();
});
it("rejects non-string ACP config option values", async () => {
const sessionStore = createInMemorySessionStore();
const connection = createAcpConnection();

View File

@ -53,6 +53,7 @@ import { ACP_AGENT_INFO, type AcpServerOptions } from "./types.js";
// Maximum allowed prompt size (2MB) to prevent DoS via memory exhaustion (CWE-400, GHSA-cxpw-2g23-2vgw)
const MAX_PROMPT_BYTES = 2 * 1024 * 1024;
const ACP_THOUGHT_LEVEL_CONFIG_ID = "thought_level";
const ACP_FAST_MODE_CONFIG_ID = "fast_mode";
const ACP_VERBOSE_LEVEL_CONFIG_ID = "verbose_level";
const ACP_REASONING_LEVEL_CONFIG_ID = "reasoning_level";
const ACP_RESPONSE_USAGE_CONFIG_ID = "response_usage";
@ -88,6 +89,7 @@ type GatewaySessionPresentationRow = Pick<
| "derivedTitle"
| "updatedAt"
| "thinkingLevel"
| "fastMode"
| "modelProvider"
| "model"
| "verboseLevel"
@ -209,6 +211,13 @@ function buildSessionPresentation(params: {
currentValue: currentModeId,
values: availableLevelIds,
}),
buildSelectConfigOption({
id: ACP_FAST_MODE_CONFIG_ID,
name: "Fast mode",
description: "Controls whether OpenAI sessions use the Gateway fast-mode profile.",
currentValue: row.fastMode ? "on" : "off",
values: ["off", "on"],
}),
buildSelectConfigOption({
id: ACP_VERBOSE_LEVEL_CONFIG_ID,
name: "Tool verbosity",
@ -925,6 +934,7 @@ export class AcpGatewayAgent implements Agent {
thinkingLevel: session.thinkingLevel,
modelProvider: session.modelProvider,
model: session.model,
fastMode: session.fastMode,
verboseLevel: session.verboseLevel,
reasoningLevel: session.reasoningLevel,
responseUsage: session.responseUsage,
@ -940,7 +950,7 @@ export class AcpGatewayAgent implements Agent {
value: string | boolean,
): {
overrides: Partial<GatewaySessionPresentationRow>;
patch: Record<string, string>;
patch: Record<string, string | boolean>;
} {
if (typeof value !== "string") {
throw new Error(
@ -953,6 +963,11 @@ export class AcpGatewayAgent implements Agent {
patch: { thinkingLevel: value },
overrides: { thinkingLevel: value },
};
case ACP_FAST_MODE_CONFIG_ID:
return {
patch: { fastMode: value === "on" },
overrides: { fastMode: value === "on" },
};
case ACP_VERBOSE_LEVEL_CONFIG_ID:
return {
patch: { verboseLevel: value },

src/agents/fast-mode.ts (new file, +50 lines)
View File

@ -0,0 +1,50 @@
import { normalizeFastMode } from "../auto-reply/thinking.js";
import type { OpenClawConfig } from "../config/config.js";
import type { SessionEntry } from "../config/sessions.js";
export type FastModeState = {
enabled: boolean;
source: "session" | "config" | "default";
};
function resolveConfiguredFastModeRaw(params: {
cfg: OpenClawConfig | undefined;
provider: string;
model: string;
}): unknown {
const modelKey = `${params.provider}/${params.model}`;
const modelConfig = params.cfg?.agents?.defaults?.models?.[modelKey];
return modelConfig?.params?.fastMode ?? modelConfig?.params?.fast_mode;
}
export function resolveConfiguredFastMode(params: {
cfg: OpenClawConfig | undefined;
provider: string;
model: string;
}): boolean {
return (
normalizeFastMode(
resolveConfiguredFastModeRaw(params) as string | boolean | null | undefined,
) ?? false
);
}
export function resolveFastModeState(params: {
cfg: OpenClawConfig | undefined;
provider: string;
model: string;
sessionEntry?: Pick<SessionEntry, "fastMode"> | undefined;
}): FastModeState {
const sessionOverride = normalizeFastMode(params.sessionEntry?.fastMode);
if (sessionOverride !== undefined) {
return { enabled: sessionOverride, source: "session" };
}
const configuredRaw = resolveConfiguredFastModeRaw(params);
const configured = normalizeFastMode(configuredRaw as string | boolean | null | undefined);
if (configured !== undefined) {
return { enabled: configured, source: "config" };
}
return { enabled: false, source: "default" };
}

View File

@ -204,6 +204,7 @@ describe("applyExtraParamsToAgent", () => {
| Model<"openai-completions">;
options?: SimpleStreamOptions;
cfg?: Record<string, unknown>;
extraParamsOverride?: Record<string, unknown>;
payload?: Record<string, unknown>;
}) {
const payload = params.payload ?? { store: false };
@ -217,6 +218,7 @@ describe("applyExtraParamsToAgent", () => {
params.cfg as Parameters<typeof applyExtraParamsToAgent>[1],
params.applyProvider,
params.applyModelId,
params.extraParamsOverride,
);
const context: Context = { messages: [] };
void agent.streamFn?.(params.model, context, params.options ?? {});
@ -1627,6 +1629,80 @@ describe("applyExtraParamsToAgent", () => {
expect(payload.service_tier).toBe("default");
});
it("injects fast-mode payload defaults for direct OpenAI Responses", () => {
const payload = runResponsesPayloadMutationCase({
applyProvider: "openai",
applyModelId: "gpt-5.4",
cfg: {
agents: {
defaults: {
models: {
"openai/gpt-5.4": {
params: {
fastMode: true,
},
},
},
},
},
},
model: {
api: "openai-responses",
provider: "openai",
id: "gpt-5.4",
baseUrl: "https://api.openai.com/v1",
} as unknown as Model<"openai-responses">,
payload: {
store: false,
},
});
expect(payload.reasoning).toEqual({ effort: "low" });
expect(payload.text).toEqual({ verbosity: "low" });
expect(payload.service_tier).toBe("priority");
});
it("preserves caller-provided OpenAI payload fields when fast mode is enabled", () => {
const payload = runResponsesPayloadMutationCase({
applyProvider: "openai",
applyModelId: "gpt-5.4",
extraParamsOverride: { fastMode: true },
model: {
api: "openai-responses",
provider: "openai",
id: "gpt-5.4",
baseUrl: "https://api.openai.com/v1",
} as unknown as Model<"openai-responses">,
payload: {
reasoning: { effort: "medium" },
text: { verbosity: "high" },
service_tier: "default",
},
});
expect(payload.reasoning).toEqual({ effort: "medium" });
expect(payload.text).toEqual({ verbosity: "high" });
expect(payload.service_tier).toBe("default");
});
it("applies fast-mode defaults for openai-codex responses without service_tier", () => {
const payload = runResponsesPayloadMutationCase({
applyProvider: "openai-codex",
applyModelId: "gpt-5.4",
extraParamsOverride: { fastMode: true },
model: {
api: "openai-codex-responses",
provider: "openai-codex",
id: "gpt-5.4",
baseUrl: "https://chatgpt.com/backend-api",
} as unknown as Model<"openai-codex-responses">,
payload: {
store: false,
},
});
expect(payload.reasoning).toEqual({ effort: "low" });
expect(payload.text).toEqual({ verbosity: "low" });
expect(payload).not.toHaveProperty("service_tier");
});
it("does not inject service_tier for non-openai providers", () => {
const payload = runResponsesPayloadMutationCase({
applyProvider: "azure-openai-responses",

View File

@ -22,8 +22,10 @@ import {
import {
createCodexDefaultTransportWrapper,
createOpenAIDefaultTransportWrapper,
createOpenAIFastModeWrapper,
createOpenAIResponsesContextManagementWrapper,
createOpenAIServiceTierWrapper,
resolveOpenAIFastMode,
resolveOpenAIServiceTier,
} from "./openai-stream-wrappers.js";
import {
@ -437,6 +439,12 @@ export function applyExtraParamsToAgent(
// upstream model-ID heuristics for Gemini 3.1 variants.
agent.streamFn = createGoogleThinkingPayloadWrapper(agent.streamFn, thinkingLevel);
const openAIFastMode = resolveOpenAIFastMode(merged);
if (openAIFastMode) {
log.debug(`applying OpenAI fast mode for ${provider}/${modelId}`);
agent.streamFn = createOpenAIFastModeWrapper(agent.streamFn);
}
const openAIServiceTier = resolveOpenAIServiceTier(merged);
if (openAIServiceTier) {
log.debug(`applying OpenAI service_tier=${openAIServiceTier} for ${provider}/${modelId}`);

View File

@ -4,6 +4,7 @@ import { streamSimple } from "@mariozechner/pi-ai";
import { log } from "./logger.js";
type OpenAIServiceTier = "auto" | "default" | "flex" | "priority";
type OpenAIReasoningEffort = "low" | "medium" | "high";
const OPENAI_RESPONSES_APIS = new Set(["openai-responses"]);
const OPENAI_RESPONSES_PROVIDERS = new Set(["openai", "azure-openai", "azure-openai-responses"]);
@ -168,6 +169,89 @@ export function resolveOpenAIServiceTier(
return normalized;
}
function normalizeOpenAIFastMode(value: unknown): boolean | undefined {
if (typeof value === "boolean") {
return value;
}
if (typeof value !== "string") {
return undefined;
}
const normalized = value.trim().toLowerCase();
if (
normalized === "on" ||
normalized === "true" ||
normalized === "yes" ||
normalized === "1" ||
normalized === "fast"
) {
return true;
}
if (
normalized === "off" ||
normalized === "false" ||
normalized === "no" ||
normalized === "0" ||
normalized === "normal"
) {
return false;
}
return undefined;
}
export function resolveOpenAIFastMode(
extraParams: Record<string, unknown> | undefined,
): boolean | undefined {
const raw = extraParams?.fastMode ?? extraParams?.fast_mode;
const normalized = normalizeOpenAIFastMode(raw);
if (raw !== undefined && normalized === undefined) {
const rawSummary = typeof raw === "string" ? raw : typeof raw;
log.warn(`ignoring invalid OpenAI fast mode param: ${rawSummary}`);
}
return normalized;
}
function resolveFastModeReasoningEffort(modelId: unknown): OpenAIReasoningEffort {
if (typeof modelId !== "string") {
return "low";
}
const normalized = modelId.trim().toLowerCase();
// Keep fast mode broadly compatible across GPT-5 family variants by using
// the lowest shared non-disabled effort that current transports accept.
if (normalized.startsWith("gpt-5")) {
return "low";
}
return "low";
}
function applyOpenAIFastModePayloadOverrides(params: {
payloadObj: Record<string, unknown>;
model: { provider?: unknown; id?: unknown; baseUrl?: unknown; api?: unknown };
}): void {
if (params.payloadObj.reasoning === undefined) {
params.payloadObj.reasoning = {
effort: resolveFastModeReasoningEffort(params.model.id),
};
}
const existingText = params.payloadObj.text;
if (existingText === undefined) {
params.payloadObj.text = { verbosity: "low" };
} else if (existingText && typeof existingText === "object" && !Array.isArray(existingText)) {
const textObj = existingText as Record<string, unknown>;
if (textObj.verbosity === undefined) {
textObj.verbosity = "low";
}
}
if (
params.model.provider === "openai" &&
params.payloadObj.service_tier === undefined &&
isOpenAIPublicApiBaseUrl(params.model.baseUrl)
) {
params.payloadObj.service_tier = "priority";
}
}
export function createOpenAIResponsesContextManagementWrapper(
baseStreamFn: StreamFn | undefined,
extraParams: Record<string, unknown> | undefined,
@ -203,6 +287,31 @@ export function createOpenAIResponsesContextManagementWrapper(
};
}
export function createOpenAIFastModeWrapper(baseStreamFn: StreamFn | undefined): StreamFn {
const underlying = baseStreamFn ?? streamSimple;
return (model, context, options) => {
if (
(model.api !== "openai-responses" && model.api !== "openai-codex-responses") ||
(model.provider !== "openai" && model.provider !== "openai-codex")
) {
return underlying(model, context, options);
}
const originalOnPayload = options?.onPayload;
return underlying(model, context, {
...options,
onPayload: (payload) => {
if (payload && typeof payload === "object") {
applyOpenAIFastModePayloadOverrides({
payloadObj: payload as Record<string, unknown>,
model,
});
}
return originalOnPayload?.(payload, model);
},
});
};
}
export function createOpenAIServiceTierWrapper(
baseStreamFn: StreamFn | undefined,
serviceTier: OpenAIServiceTier,

View File

@ -892,6 +892,7 @@ export async function runEmbeddedPiAgent(
agentId: workspaceResolution.agentId,
legacyBeforeAgentStartResult,
thinkLevel,
fastMode: params.fastMode,
verboseLevel: params.verboseLevel,
reasoningLevel: params.reasoningLevel,
toolResultFormat: resolvedToolResultFormat,

View File

@ -1930,7 +1930,10 @@ export async function runEmbeddedAttempt(
params.config,
params.provider,
params.modelId,
params.streamParams,
{
...params.streamParams,
fastMode: params.fastMode,
},
params.thinkLevel,
sessionAgentId,
);

View File

@ -79,6 +79,7 @@ export type RunEmbeddedPiAgentParams = {
authProfileId?: string;
authProfileIdSource?: "auto" | "user";
thinkLevel?: ThinkLevel;
fastMode?: boolean;
verboseLevel?: VerboseLevel;
reasoningLevel?: ReasoningLevel;
toolResultFormat?: ToolResultFormat;

View File

@ -597,6 +597,22 @@ function buildChatCommands(): ChatCommandDefinition[] {
],
argsMenu: "auto",
}),
defineChatCommand({
key: "fast",
nativeName: "fast",
description: "Toggle fast mode.",
textAlias: "/fast",
category: "options",
args: [
{
name: "mode",
description: "status, on, or off",
type: "string",
choices: ["status", "on", "off"],
},
],
argsMenu: "auto",
}),
defineChatCommand({
key: "reasoning",
nativeName: "reasoning",

View File

@ -198,6 +198,17 @@ describe("commands registry", () => {
]);
});
it("registers fast mode as a first-class options command", () => {
const fast = listChatCommands().find((command) => command.key === "fast");
expect(fast).toMatchObject({
nativeName: "fast",
textAliases: ["/fast"],
category: "options",
});
const modeArg = fast?.args?.find((arg) => arg.name === "mode");
expect(modeArg?.choices).toEqual(["status", "on", "off"]);
});
it("detects known text commands", () => {
const detection = getCommandDetection();
expect(detection.exact.has("/commands")).toBe(true);

View File

@ -4,6 +4,7 @@ import { withTempHome as withTempHomeBase } from "../../test/helpers/temp-home.j
import { loadModelCatalog } from "../agents/model-catalog.js";
import { runEmbeddedPiAgent } from "../agents/pi-embedded.js";
import { loadSessionStore } from "../config/sessions.js";
import { runEmbeddedPiAgentMock } from "./reply.directive.directive-behavior.e2e-mocks.js";
export { loadModelCatalog } from "../agents/model-catalog.js";
export { runEmbeddedPiAgent } from "../agents/pi-embedded.js";
@ -134,7 +135,7 @@ export function assertElevatedOffStatusReply(text: string | undefined) {
export function installDirectiveBehaviorE2EHooks() {
beforeEach(() => {
vi.mocked(runEmbeddedPiAgent).mockReset();
runEmbeddedPiAgentMock.mockReset();
vi.mocked(loadModelCatalog).mockResolvedValue(DEFAULT_TEST_MODEL_CATALOG);
});

View File

@ -1,5 +1,5 @@
import "./reply.directive.directive-behavior.e2e-mocks.js";
import { describe, expect, it, vi } from "vitest";
import { describe, expect, it } from "vitest";
import type { OpenClawConfig } from "../config/config.js";
import { loadSessionStore } from "../config/sessions.js";
import {
@ -10,10 +10,10 @@ import {
makeRestrictedElevatedDisabledConfig,
makeWhatsAppDirectiveConfig,
replyText,
runEmbeddedPiAgent,
sessionStorePath,
withTempHome,
} from "./reply.directive.directive-behavior.e2e-harness.js";
import { runEmbeddedPiAgentMock } from "./reply.directive.directive-behavior.e2e-mocks.js";
import { getReplyFromConfig } from "./reply.js";
const COMMAND_MESSAGE_BASE = {
@ -126,6 +126,18 @@ describe("directive behavior", () => {
it("reports current directive defaults when no arguments are provided", async () => {
await withTempHome(async (home) => {
const fastText = await runCommand(home, "/fast", {
defaults: {
models: {
"anthropic/claude-opus-4-5": {
params: { fastMode: true },
},
},
},
});
expect(fastText).toContain("Current fast mode: on (config)");
expect(fastText).toContain("Options: on, off.");
const verboseText = await runCommand(home, "/verbose", {
defaults: { verboseDefault: "on" },
});
@ -158,7 +170,28 @@ describe("directive behavior", () => {
expect(execText).toContain(
"Options: host=sandbox|gateway|node, security=deny|allowlist|full, ask=off|on-miss|always, node=<id>.",
);
expect(runEmbeddedPiAgent).not.toHaveBeenCalled();
expect(runEmbeddedPiAgentMock).not.toHaveBeenCalled();
});
});
it("persists fast toggles across /status and /fast", async () => {
await withTempHome(async (home) => {
const storePath = sessionStorePath(home);
const onText = await runCommand(home, "/fast on");
expect(onText).toContain("Fast mode enabled");
expect(loadSessionStore(storePath)["agent:main:main"]?.fastMode).toBe(true);
const statusText = await runCommand(home, "/status");
const optionsLine = statusText?.split("\n").find((line) => line.trim().startsWith("⚙️"));
expect(optionsLine).toContain("Fast: on");
const offText = await runCommand(home, "/fast off");
expect(offText).toContain("Fast mode disabled");
expect(loadSessionStore(storePath)["agent:main:main"]?.fastMode).toBe(false);
const fastText = await runCommand(home, "/fast");
expect(fastText).toContain("Current fast mode: off");
expect(runEmbeddedPiAgentMock).not.toHaveBeenCalled();
});
});
it("persists elevated toggles across /status and /elevated", async () => {
@ -181,7 +214,7 @@ describe("directive behavior", () => {
const store = loadSessionStore(storePath);
expect(store["agent:main:main"]?.elevatedLevel).toBe("on");
expect(runEmbeddedPiAgent).not.toHaveBeenCalled();
expect(runEmbeddedPiAgentMock).not.toHaveBeenCalled();
});
});
it("enforces per-agent elevated restrictions and status visibility", async () => {
@ -217,7 +250,7 @@ describe("directive behavior", () => {
);
const statusText = replyText(statusRes);
expect(statusText).not.toContain("elevated");
expect(runEmbeddedPiAgent).not.toHaveBeenCalled();
expect(runEmbeddedPiAgentMock).not.toHaveBeenCalled();
});
});
it("applies per-agent allowlist requirements before allowing elevated", async () => {
@ -245,7 +278,7 @@ describe("directive behavior", () => {
const allowedText = replyText(allowedRes);
expect(allowedText).toContain("Elevated mode set to ask");
expect(runEmbeddedPiAgent).not.toHaveBeenCalled();
expect(runEmbeddedPiAgentMock).not.toHaveBeenCalled();
});
});
it("handles runtime warning, invalid level, and multi-directive elevated inputs", async () => {
@ -280,7 +313,7 @@ describe("directive behavior", () => {
expect(text).toContain(snippet);
}
}
expect(runEmbeddedPiAgent).not.toHaveBeenCalled();
expect(runEmbeddedPiAgentMock).not.toHaveBeenCalled();
});
});
it("persists queue overrides and reset behavior", async () => {
@ -317,12 +350,12 @@ describe("directive behavior", () => {
expect(entry?.queueDebounceMs).toBeUndefined();
expect(entry?.queueCap).toBeUndefined();
expect(entry?.queueDrop).toBeUndefined();
expect(runEmbeddedPiAgent).not.toHaveBeenCalled();
expect(runEmbeddedPiAgentMock).not.toHaveBeenCalled();
});
});
it("strips inline elevated directives from the user text (does not persist session override)", async () => {
await withTempHome(async (home) => {
vi.mocked(runEmbeddedPiAgent).mockResolvedValue({
runEmbeddedPiAgentMock.mockResolvedValue({
payloads: [{ text: "ok" }],
meta: {
durationMs: 1,
@ -346,7 +379,7 @@ describe("directive behavior", () => {
const store = loadSessionStore(storePath);
expect(store["agent:main:main"]?.elevatedLevel).toBeUndefined();
const calls = vi.mocked(runEmbeddedPiAgent).mock.calls;
const calls = runEmbeddedPiAgentMock.mock.calls;
expect(calls.length).toBeGreaterThan(0);
const call = calls[0]?.[0];
expect(call?.prompt).toContain("hello there");

View File

@ -8,7 +8,7 @@ import {
extractThinkDirective,
extractVerboseDirective,
} from "./reply.js";
import { extractStatusDirective } from "./reply/directives.js";
import { extractFastDirective, extractStatusDirective } from "./reply/directives.js";
describe("directive parsing", () => {
it("ignores verbose directive inside URL", () => {
@ -49,6 +49,12 @@ describe("directive parsing", () => {
expect(res.reasoningLevel).toBe("stream");
});
it("matches fast directive", () => {
const res = extractFastDirective("/fast on please");
expect(res.hasDirective).toBe(true);
expect(res.fastMode).toBe(true);
});
it("matches elevated with leading space", () => {
const res = extractElevatedDirective(" please /elevated on now");
expect(res.hasDirective).toBe(true);
@ -106,6 +112,14 @@ describe("directive parsing", () => {
expect(res.cleaned).toBe("");
});
it("matches fast with no argument", () => {
const res = extractFastDirective("/fast:");
expect(res.hasDirective).toBe(true);
expect(res.fastMode).toBeUndefined();
expect(res.rawLevel).toBeUndefined();
expect(res.cleaned).toBe("");
});
it("matches reasoning with no argument", () => {
const res = extractReasoningDirective("/reasoning:");
expect(res.hasDirective).toBe(true);

View File

@ -26,6 +26,7 @@ import { handlePluginCommand } from "./commands-plugin.js";
import {
handleAbortTrigger,
handleActivationCommand,
handleFastCommand,
handleRestartCommand,
handleSessionCommand,
handleSendPolicyCommand,
@ -176,6 +177,7 @@ export async function handleCommands(params: HandleCommandsParams): Promise<Comm
handleBashCommand,
handleActivationCommand,
handleSendPolicyCommand,
handleFastCommand,
handleUsageCommand,
handleSessionCommand,
handleRestartCommand,

View File

@ -1,3 +1,4 @@
import { resolveFastModeState } from "../../agents/fast-mode.js";
import { parseDurationMs } from "../../cli/parse-duration.js";
import { isRestartEnabled } from "../../config/commands.js";
import {
@ -22,7 +23,7 @@ import {
import { formatTokenCount, formatUsd } from "../../utils/usage-format.js";
import { parseActivationCommand } from "../group-activation.js";
import { parseSendPolicyCommand } from "../send-policy.js";
-import { normalizeUsageDisplay, resolveResponseUsageMode } from "../thinking.js";
+import { normalizeFastMode, normalizeUsageDisplay, resolveResponseUsageMode } from "../thinking.js";
import { isDiscordSurface, isTelegramSurface, resolveChannelAccountId } from "./channel-context.js";
import { handleAbortTrigger, handleStopCommand } from "./commands-session-abort.js";
import { persistSessionEntry } from "./commands-session-store.js";
@ -291,6 +292,57 @@ export const handleUsageCommand: CommandHandler = async (params, allowTextComman
};
};
export const handleFastCommand: CommandHandler = async (params, allowTextCommands) => {
if (!allowTextCommands) {
return null;
}
const normalized = params.command.commandBodyNormalized;
if (normalized !== "/fast" && !normalized.startsWith("/fast ")) {
return null;
}
if (!params.command.isAuthorizedSender) {
logVerbose(
`Ignoring /fast from unauthorized sender: ${params.command.senderId || "<unknown>"}`,
);
return { shouldContinue: false };
}
const rawArgs = normalized === "/fast" ? "" : normalized.slice("/fast".length).trim();
const rawMode = rawArgs.toLowerCase();
if (!rawMode || rawMode === "status") {
const state = resolveFastModeState({
cfg: params.cfg,
provider: params.provider,
model: params.model,
sessionEntry: params.sessionEntry,
});
const suffix =
state.source === "config" ? " (config)" : state.source === "default" ? " (default)" : "";
return {
shouldContinue: false,
reply: { text: `⚙️ Current fast mode: ${state.enabled ? "on" : "off"}${suffix}.` },
};
}
const nextMode = normalizeFastMode(rawMode);
if (nextMode === undefined) {
return {
shouldContinue: false,
reply: { text: "⚙️ Usage: /fast status|on|off" },
};
}
if (params.sessionEntry && params.sessionStore && params.sessionKey) {
params.sessionEntry.fastMode = nextMode;
await persistSessionEntry(params);
}
return {
shouldContinue: false,
reply: { text: `⚙️ Fast mode ${nextMode ? "enabled" : "disabled"}.` },
};
};
export const handleSessionCommand: CommandHandler = async (params, allowTextCommands) => {
if (!allowTextCommands) {
return null;

View File

@ -3,6 +3,7 @@ import {
resolveDefaultAgentId,
resolveSessionAgentId,
} from "../../agents/agent-scope.js";
import { resolveFastModeState } from "../../agents/fast-mode.js";
import { resolveModelAuthLabel } from "../../agents/model-auth-label.js";
import { listSubagentRunsForRequester } from "../../agents/subagent-registry.js";
import {
@ -40,6 +41,7 @@ export async function buildStatusReply(params: {
model: string;
contextTokens: number;
resolvedThinkLevel?: ThinkLevel;
resolvedFastMode?: boolean;
resolvedVerboseLevel: VerboseLevel;
resolvedReasoningLevel: ReasoningLevel;
resolvedElevatedLevel?: ElevatedLevel;
@ -60,6 +62,7 @@ export async function buildStatusReply(params: {
model,
contextTokens,
resolvedThinkLevel,
resolvedFastMode,
resolvedVerboseLevel,
resolvedReasoningLevel,
resolvedElevatedLevel,
@ -160,6 +163,14 @@ export async function buildStatusReply(params: {
})
: selectedModelAuth;
const agentDefaults = cfg.agents?.defaults ?? {};
const effectiveFastMode =
resolvedFastMode ??
resolveFastModeState({
cfg,
provider,
model,
sessionEntry,
}).enabled;
const statusText = buildStatusMessage({
config: cfg,
agent: {
@ -181,6 +192,7 @@ export async function buildStatusReply(params: {
sessionStorePath: storePath,
groupActivation,
resolvedThink: resolvedThinkLevel ?? (await resolveDefaultThinkingLevel()),
resolvedFast: effectiveFastMode,
resolvedVerbose: resolvedVerboseLevel,
resolvedReasoning: resolvedReasoningLevel,
resolvedElevated: resolvedElevatedLevel,

View File

@ -48,8 +48,13 @@ export async function applyInlineDirectivesFastLane(
}
const agentCfg = params.agentCfg;
-  const { currentThinkLevel, currentVerboseLevel, currentReasoningLevel, currentElevatedLevel } =
-    await resolveCurrentDirectiveLevels({
+  const {
+    currentThinkLevel,
+    currentFastMode,
+    currentVerboseLevel,
+    currentReasoningLevel,
+    currentElevatedLevel,
+  } = await resolveCurrentDirectiveLevels({
sessionEntry,
agentCfg,
resolveDefaultThinkingLevel: () => modelState.resolveDefaultThinkingLevel(),
@ -77,6 +82,7 @@ export async function applyInlineDirectivesFastLane(
initialModelLabel: params.initialModelLabel,
formatModelSwitchEvent,
currentThinkLevel,
currentFastMode,
currentVerboseLevel,
currentReasoningLevel,
currentElevatedLevel,

View File

@ -3,6 +3,7 @@ import {
resolveAgentDir,
resolveSessionAgentId,
} from "../../agents/agent-scope.js";
import { resolveFastModeState } from "../../agents/fast-mode.js";
import { resolveSandboxRuntimeStatus } from "../../agents/sandbox.js";
import type { OpenClawConfig } from "../../config/config.js";
import { type SessionEntry, updateSessionStore } from "../../config/sessions.js";
@ -78,6 +79,7 @@ export async function handleDirectiveOnly(
initialModelLabel,
formatModelSwitchEvent,
currentThinkLevel,
currentFastMode,
currentVerboseLevel,
currentReasoningLevel,
currentElevatedLevel,
@ -131,6 +133,15 @@ export async function handleDirectiveOnly(
const resolvedProvider = modelSelection?.provider ?? provider;
const resolvedModel = modelSelection?.model ?? model;
const fastModeState = resolveFastModeState({
cfg: params.cfg,
provider: resolvedProvider,
model: resolvedModel,
sessionEntry,
});
const effectiveFastMode = directives.fastMode ?? currentFastMode ?? fastModeState.enabled;
const effectiveFastModeSource =
directives.fastMode !== undefined ? "session" : fastModeState.source;
if (directives.hasThinkDirective && !directives.thinkLevel) {
// If no argument was provided, show the current level
@ -158,6 +169,25 @@ export async function handleDirectiveOnly(
text: `Unrecognized verbose level "${directives.rawVerboseLevel}". Valid levels: off, on, full.`,
};
}
if (directives.hasFastDirective && directives.fastMode === undefined) {
if (!directives.rawFastMode) {
const sourceSuffix =
effectiveFastModeSource === "config"
? " (config)"
: effectiveFastModeSource === "default"
? " (default)"
: "";
return {
text: withOptions(
`Current fast mode: ${effectiveFastMode ? "on" : "off"}${sourceSuffix}.`,
"on, off",
),
};
}
return {
text: `Unrecognized fast mode "${directives.rawFastMode}". Valid levels: on, off.`,
};
}
if (directives.hasReasoningDirective && !directives.reasoningLevel) {
if (!directives.rawReasoningLevel) {
const level = currentReasoningLevel ?? "off";
@ -279,11 +309,18 @@ export async function handleDirectiveOnly(
directives.elevatedLevel !== undefined &&
elevatedEnabled &&
elevatedAllowed;
const fastModeChanged =
directives.hasFastDirective &&
directives.fastMode !== undefined &&
directives.fastMode !== currentFastMode;
let reasoningChanged =
directives.hasReasoningDirective && directives.reasoningLevel !== undefined;
if (directives.hasThinkDirective && directives.thinkLevel) {
sessionEntry.thinkingLevel = directives.thinkLevel;
}
if (directives.hasFastDirective && directives.fastMode !== undefined) {
sessionEntry.fastMode = directives.fastMode;
}
if (shouldDowngradeXHigh) {
sessionEntry.thinkingLevel = "high";
}
@ -380,6 +417,13 @@ export async function handleDirectiveOnly(
: `Thinking level set to ${directives.thinkLevel}.`,
);
}
if (directives.hasFastDirective && directives.fastMode !== undefined) {
parts.push(
directives.fastMode
? formatDirectiveAck("Fast mode enabled.")
: formatDirectiveAck("Fast mode disabled."),
);
}
if (directives.hasVerboseDirective && directives.verboseLevel) {
parts.push(
directives.verboseLevel === "off"
@ -459,6 +503,12 @@ export async function handleDirectiveOnly(
if (directives.hasQueueDirective && directives.dropPolicy) {
parts.push(formatDirectiveAck(`Queue drop set to ${directives.dropPolicy}.`));
}
if (fastModeChanged) {
enqueueSystemEvent(`Fast mode ${sessionEntry.fastMode ? "enabled" : "disabled"}.`, {
sessionKey,
contextKey: `fast:${sessionEntry.fastMode ? "on" : "off"}`,
});
}
const ack = parts.join(" ").trim();
if (!ack && directives.hasStatusDirective) {
return undefined;
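The effective fast mode above resolves in three layers: an inline `/fast` directive wins, then the session entry, then `resolveFastModeState`. That helper lives in `fast-mode.ts`, which this commit's hunks do not show, so the following is an assumed sketch of its precedence (session override, then per-model `params.fastMode` config, then off), chosen to match the `(config)` / `(default)` suffixes the status replies print:

```typescript
// Assumed precedence sketch - resolveFastModeState's real implementation is
// in fast-mode.ts, not shown in this diff. Source labels mirror the
// "(config)" / "(default)" suffixes used by /fast status replies.
type FastModeState = { enabled: boolean; source: "session" | "config" | "default" };

function sketchResolveFastModeState(opts: {
  sessionFastMode?: boolean; // sessionEntry.fastMode
  configFastMode?: boolean;  // models["<provider>/<model>"].params.fastMode
}): FastModeState {
  if (typeof opts.sessionFastMode === "boolean") {
    return { enabled: opts.sessionFastMode, source: "session" };
  }
  if (typeof opts.configFastMode === "boolean") {
    return { enabled: opts.configFastMode, source: "config" };
  }
  return { enabled: false, source: "default" };
}

// An inline /fast directive, when present, wins for the current message:
const directiveFastMode: boolean | undefined = undefined; // no /fast in message
const state = sketchResolveFastModeState({ configFastMode: true });
const effectiveFastMode = directiveFastMode ?? state.enabled;
console.log(effectiveFastMode, state.source); // prints: true config
```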

View File

@ -3,6 +3,7 @@ import type { ElevatedLevel, ReasoningLevel, ThinkLevel, VerboseLevel } from "..
export async function resolveCurrentDirectiveLevels(params: {
sessionEntry?: {
thinkingLevel?: unknown;
fastMode?: unknown;
verboseLevel?: unknown;
reasoningLevel?: unknown;
elevatedLevel?: unknown;
@ -15,6 +16,7 @@ export async function resolveCurrentDirectiveLevels(params: {
resolveDefaultThinkingLevel: () => Promise<ThinkLevel | undefined>;
}): Promise<{
currentThinkLevel: ThinkLevel | undefined;
currentFastMode: boolean | undefined;
currentVerboseLevel: VerboseLevel | undefined;
currentReasoningLevel: ReasoningLevel;
currentElevatedLevel: ElevatedLevel | undefined;
@ -24,6 +26,8 @@ export async function resolveCurrentDirectiveLevels(params: {
(await params.resolveDefaultThinkingLevel()) ??
(params.agentCfg?.thinkingDefault as ThinkLevel | undefined);
const currentThinkLevel = resolvedDefaultThinkLevel;
const currentFastMode =
typeof params.sessionEntry?.fastMode === "boolean" ? params.sessionEntry.fastMode : undefined;
const currentVerboseLevel =
(params.sessionEntry?.verboseLevel as VerboseLevel | undefined) ??
(params.agentCfg?.verboseDefault as VerboseLevel | undefined);
@ -34,6 +38,7 @@ export async function resolveCurrentDirectiveLevels(params: {
(params.agentCfg?.elevatedDefault as ElevatedLevel | undefined);
return {
currentThinkLevel,
currentFastMode,
currentVerboseLevel,
currentReasoningLevel,
currentElevatedLevel,

View File

@ -32,6 +32,7 @@ export type HandleDirectiveOnlyCoreParams = {
export type HandleDirectiveOnlyParams = HandleDirectiveOnlyCoreParams & {
currentThinkLevel?: ThinkLevel;
currentFastMode?: boolean;
currentVerboseLevel?: VerboseLevel;
currentReasoningLevel?: ReasoningLevel;
currentElevatedLevel?: ElevatedLevel;

View File

@ -6,6 +6,7 @@ import type { ElevatedLevel, ReasoningLevel, ThinkLevel, VerboseLevel } from "./
import {
extractElevatedDirective,
extractExecDirective,
extractFastDirective,
extractReasoningDirective,
extractStatusDirective,
extractThinkDirective,
@ -23,6 +24,9 @@ export type InlineDirectives = {
hasVerboseDirective: boolean;
verboseLevel?: VerboseLevel;
rawVerboseLevel?: string;
hasFastDirective: boolean;
fastMode?: boolean;
rawFastMode?: string;
hasReasoningDirective: boolean;
reasoningLevel?: ReasoningLevel;
rawReasoningLevel?: string;
@ -80,12 +84,18 @@ export function parseInlineDirectives(
rawLevel: rawVerboseLevel,
hasDirective: hasVerboseDirective,
} = extractVerboseDirective(thinkCleaned);
const {
cleaned: fastCleaned,
fastMode,
rawLevel: rawFastMode,
hasDirective: hasFastDirective,
} = extractFastDirective(verboseCleaned);
const {
cleaned: reasoningCleaned,
reasoningLevel,
rawLevel: rawReasoningLevel,
hasDirective: hasReasoningDirective,
-  } = extractReasoningDirective(verboseCleaned);
+  } = extractReasoningDirective(fastCleaned);
const {
cleaned: elevatedCleaned,
elevatedLevel,
@ -151,6 +161,9 @@ export function parseInlineDirectives(
hasVerboseDirective,
verboseLevel,
rawVerboseLevel,
hasFastDirective,
fastMode,
rawFastMode,
hasReasoningDirective,
reasoningLevel,
rawReasoningLevel,
@ -201,6 +214,7 @@ export function isDirectiveOnly(params: {
if (
!directives.hasThinkDirective &&
!directives.hasVerboseDirective &&
!directives.hasFastDirective &&
!directives.hasReasoningDirective &&
!directives.hasElevatedDirective &&
!directives.hasExecDirective &&

View File

@ -2,6 +2,7 @@ import { escapeRegExp } from "../../utils.js";
import type { NoticeLevel, ReasoningLevel } from "../thinking.js";
import {
type ElevatedLevel,
  normalizeElevatedLevel,
  normalizeFastMode,
normalizeNoticeLevel,
normalizeReasoningLevel,
@ -124,6 +125,24 @@ export function extractVerboseDirective(body?: string): {
};
}
export function extractFastDirective(body?: string): {
cleaned: string;
fastMode?: boolean;
rawLevel?: string;
hasDirective: boolean;
} {
if (!body) {
return { cleaned: "", hasDirective: false };
}
const extracted = extractLevelDirective(body, ["fast"], normalizeFastMode);
return {
cleaned: extracted.cleaned,
fastMode: extracted.level,
rawLevel: extracted.rawLevel,
hasDirective: extracted.hasDirective,
};
}
export function extractNoticeDirective(body?: string): {
cleaned: string;
noticeLevel?: NoticeLevel;
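The directive shapes the tests above exercise (`/fast on`, `/fast:`) can be illustrated with a small regex sketch. This is hypothetical: the real parsing is delegated to `extractLevelDirective`, whose implementation is not part of this diff.

```typescript
// Hypothetical sketch of the /fast directive grammar - the real parser is
// extractLevelDirective, not shown in this commit. It recognizes "/fast",
// "/fast <arg>", and "/fast:<arg>", and strips the directive from the body.
function sketchExtractFast(body: string): {
  hasDirective: boolean;
  rawLevel?: string;
  cleaned: string;
} {
  const re = /(^|\s)\/fast(?::\s*|\s+)?([a-z0-9]+)?/i;
  const match = body.match(re);
  if (!match) {
    return { hasDirective: false, cleaned: body };
  }
  return {
    hasDirective: true,
    rawLevel: match[2]?.toLowerCase(), // undefined for bare "/fast" or "/fast:"
    cleaned: body.replace(re, "$1").replace(/\s+/g, " ").trim(),
  };
}

console.log(sketchExtractFast("/fast on please"));
// { hasDirective: true, rawLevel: 'on', cleaned: 'please' }
```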

View File

@ -150,6 +150,7 @@ export async function applyInlineDirectiveOverrides(params: {
}
const {
currentThinkLevel: resolvedDefaultThinkLevel,
currentFastMode,
currentVerboseLevel,
currentReasoningLevel,
currentElevatedLevel,
@ -162,6 +163,7 @@ export async function applyInlineDirectiveOverrides(params: {
const directiveReply = await handleDirectiveOnly({
...createDirectiveHandlingBase(),
currentThinkLevel,
currentFastMode,
currentVerboseLevel,
currentReasoningLevel,
currentElevatedLevel,
@ -201,6 +203,7 @@ export async function applyInlineDirectiveOverrides(params: {
const hasAnyDirective =
directives.hasThinkDirective ||
directives.hasFastDirective ||
directives.hasVerboseDirective ||
directives.hasReasoningDirective ||
directives.hasElevatedDirective ||

View File

@ -26,6 +26,9 @@ export function clearInlineDirectives(cleaned: string): InlineDirectives {
hasVerboseDirective: false,
verboseLevel: undefined,
rawVerboseLevel: undefined,
hasFastDirective: false,
fastMode: undefined,
rawFastMode: undefined,
hasReasoningDirective: false,
reasoningLevel: undefined,
rawReasoningLevel: undefined,

View File

@ -1,4 +1,5 @@
import type { ExecToolDefaults } from "../../agents/bash-tools.js";
import { resolveFastModeState } from "../../agents/fast-mode.js";
import type { ModelAliasIndex } from "../../agents/model-selection.js";
import { resolveSandboxRuntimeStatus } from "../../agents/sandbox.js";
import type { SkillCommandSpec } from "../../agents/skills.js";
@ -37,6 +38,7 @@ export type ReplyDirectiveContinuation = {
elevatedFailures: Array<{ gate: string; key: string }>;
defaultActivation: ReturnType<typeof defaultGroupActivation>;
resolvedThinkLevel: ThinkLevel | undefined;
resolvedFastMode: boolean;
resolvedVerboseLevel: VerboseLevel | undefined;
resolvedReasoningLevel: ReasoningLevel;
resolvedElevatedLevel: ElevatedLevel;
@ -228,6 +230,7 @@ export async function resolveReplyDirectives(params: {
const hasInlineDirective =
parsedDirectives.hasThinkDirective ||
parsedDirectives.hasVerboseDirective ||
parsedDirectives.hasFastDirective ||
parsedDirectives.hasReasoningDirective ||
parsedDirectives.hasElevatedDirective ||
parsedDirectives.hasExecDirective ||
@ -260,6 +263,7 @@ export async function resolveReplyDirectives(params: {
...parsedDirectives,
hasThinkDirective: false,
hasVerboseDirective: false,
hasFastDirective: false,
hasReasoningDirective: false,
hasStatusDirective: false,
hasModelDirective: false,
@ -340,6 +344,14 @@ export async function resolveReplyDirectives(params: {
const defaultActivation = defaultGroupActivation(requireMention);
const resolvedThinkLevel =
directives.thinkLevel ?? (sessionEntry?.thinkingLevel as ThinkLevel | undefined);
const resolvedFastMode =
directives.fastMode ??
resolveFastModeState({
cfg,
provider,
model,
sessionEntry,
}).enabled;
const resolvedVerboseLevel =
directives.verboseLevel ??
@ -479,6 +491,7 @@ export async function resolveReplyDirectives(params: {
elevatedFailures,
defaultActivation,
resolvedThinkLevel: resolvedThinkLevelWithDefault,
resolvedFastMode,
resolvedVerboseLevel,
resolvedReasoningLevel,
resolvedElevatedLevel,

View File

@ -30,8 +30,13 @@ import type { createModelSelectionState } from "./model-selection.js";
import { extractInlineSimpleCommand } from "./reply-inline.js";
import type { TypingController } from "./typing.js";
-const builtinSlashCommands = (() => {
-  return listReservedChatSlashCommandNames([
+let builtinSlashCommands: Set<string> | null = null;
+function getBuiltinSlashCommands(): Set<string> {
+  if (builtinSlashCommands) {
+    return builtinSlashCommands;
+  }
+  builtinSlashCommands = listReservedChatSlashCommandNames([
"think",
"verbose",
"reasoning",
@ -41,7 +46,8 @@ const builtinSlashCommands = (() => {
"status",
"queue",
]);
-})();
+  return builtinSlashCommands;
+}
function resolveSlashCommandName(commandBodyNormalized: string): string | null {
const trimmed = commandBodyNormalized.trim();
@ -163,7 +169,7 @@ export async function handleInlineActions(params: {
allowTextCommands &&
slashCommandName !== null &&
// `/skill …` needs the full skill command list.
-    (slashCommandName === "skill" || !builtinSlashCommands.has(slashCommandName));
+    (slashCommandName === "skill" || !getBuiltinSlashCommands().has(slashCommandName));
const skillCommands =
shouldLoadSkillCommands && params.skillCommands
? params.skillCommands

View File

@ -1,6 +1,7 @@
import crypto from "node:crypto";
import { resolveSessionAuthProfileOverride } from "../../agents/auth-profiles/session-override.js";
import type { ExecToolDefaults } from "../../agents/bash-tools.js";
import { resolveFastModeState } from "../../agents/fast-mode.js";
import {
abortEmbeddedPiRun,
isEmbeddedPiRunActive,
@ -509,6 +510,12 @@ export async function runPreparedReply(
authProfileId,
authProfileIdSource,
thinkLevel: resolvedThinkLevel,
fastMode: resolveFastModeState({
cfg,
provider,
model,
sessionEntry,
}).enabled,
verboseLevel: resolvedVerboseLevel,
reasoningLevel: resolvedReasoningLevel,
elevatedLevel: resolvedElevatedLevel,

View File

@ -113,6 +113,23 @@ describe("buildStatusMessage", () => {
expect(normalized).toContain("Reasoning: on");
});
it("shows fast mode when enabled", () => {
const text = buildStatusMessage({
agent: {
model: "openai/gpt-5.4",
},
sessionEntry: {
sessionId: "fast",
updatedAt: 0,
fastMode: true,
},
sessionKey: "agent:main:main",
queue: { mode: "collect", depth: 0 },
});
expect(normalizeTestText(text)).toContain("Fast: on");
});
it("notes channel model overrides in status output", () => {
const text = buildStatusMessage({
config: {
@ -708,6 +725,10 @@ describe("buildHelpMessage", () => {
expect(text).not.toContain("/config");
expect(text).not.toContain("/debug");
});
it("includes /fast in help output", () => {
expect(buildHelpMessage()).toContain("/fast on|off");
});
});
describe("buildCommandsMessagePaginated", () => {

View File

@ -77,6 +77,7 @@ type StatusArgs = {
sessionStorePath?: string;
groupActivation?: "mention" | "always";
resolvedThink?: ThinkLevel;
resolvedFast?: boolean;
resolvedVerbose?: VerboseLevel;
resolvedReasoning?: ReasoningLevel;
resolvedElevated?: ElevatedLevel;
@ -510,6 +511,7 @@ export function buildStatusMessage(args: StatusArgs): string {
args.resolvedThink ?? args.sessionEntry?.thinkingLevel ?? args.agent?.thinkingDefault ?? "off";
const verboseLevel =
args.resolvedVerbose ?? args.sessionEntry?.verboseLevel ?? args.agent?.verboseDefault ?? "off";
const fastMode = args.resolvedFast ?? args.sessionEntry?.fastMode ?? false;
const reasoningLevel = args.resolvedReasoning ?? args.sessionEntry?.reasoningLevel ?? "off";
const elevatedLevel =
args.resolvedElevated ??
@ -556,6 +558,7 @@ export function buildStatusMessage(args: StatusArgs): string {
const optionParts = [
`Runtime: ${runtime.label}`,
`Think: ${thinkLevel}`,
fastMode ? "Fast: on" : null,
verboseLabel,
reasoningLevel !== "off" ? `Reasoning: ${reasoningLevel}` : null,
elevatedLabel,
@ -728,7 +731,7 @@ export function buildHelpMessage(cfg?: OpenClawConfig): string {
lines.push(" /new | /reset | /compact [instructions] | /stop");
lines.push("");
-  const optionParts = ["/think <level>", "/model <id>", "/verbose on|off"];
+  const optionParts = ["/think <level>", "/model <id>", "/fast on|off", "/verbose on|off"];
if (isCommandFlagEnabled(cfg, "config")) {
optionParts.push("/config");
}

View File

@ -218,6 +218,24 @@ export function resolveResponseUsageMode(raw?: string | null): UsageDisplayLevel
return normalizeUsageDisplay(raw) ?? "off";
}
// Normalize fast-mode flags used to toggle low-latency model behavior.
export function normalizeFastMode(raw?: string | boolean | null): boolean | undefined {
if (typeof raw === "boolean") {
return raw;
}
if (!raw) {
return undefined;
}
const key = raw.toLowerCase();
if (["off", "false", "no", "0", "disable", "disabled", "normal"].includes(key)) {
return false;
}
if (["on", "true", "yes", "1", "enable", "enabled", "fast"].includes(key)) {
return true;
}
return undefined;
}
// Normalize elevated flags used to toggle elevated bash permissions.
export function normalizeElevatedLevel(raw?: string | null): ElevatedLevel | undefined {
if (!raw) {
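The alias handling in `normalizeFastMode` can be exercised standalone. The function below is a copy of the hunk above (not an import), shown with its tri-state result:

```typescript
// Mirror of normalizeFastMode from the hunk above, for illustration.
// Booleans pass through; recognized aliases map to a boolean; anything
// else yields undefined so callers can report "unrecognized".
function normalizeFastMode(raw?: string | boolean | null): boolean | undefined {
  if (typeof raw === "boolean") {
    return raw;
  }
  if (!raw) {
    return undefined;
  }
  const key = raw.toLowerCase();
  if (["off", "false", "no", "0", "disable", "disabled", "normal"].includes(key)) {
    return false;
  }
  if (["on", "true", "yes", "1", "enable", "enabled", "fast"].includes(key)) {
    return true;
  }
  return undefined;
}

console.log(normalizeFastMode("Enabled")); // true
console.log(normalizeFastMode("normal"));  // false
console.log(normalizeFastMode("maybe"));   // undefined
```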

View File

@ -15,6 +15,8 @@ export type AgentStreamParams = {
/** Provider stream params override (best-effort). */
temperature?: number;
maxTokens?: number;
/** Provider fast-mode override (best-effort). */
fastMode?: boolean;
};
export type AgentRunContext = {

View File

@ -36,6 +36,9 @@ const buildFlags = (entry?: SessionEntry): string[] => {
if (typeof verbose === "string" && verbose.length > 0) {
flags.push(`verbose:${verbose}`);
}
if (typeof entry?.fastMode === "boolean") {
flags.push(entry.fastMode ? "fast" : "fast:off");
}
const reasoning = entry?.reasoningLevel;
if (typeof reasoning === "string" && reasoning.length > 0) {
flags.push(`reasoning:${reasoning}`);
@ -170,6 +173,7 @@ export async function getStatusSummary(
updatedAt,
age,
thinkingLevel: entry?.thinkingLevel,
fastMode: entry?.fastMode,
verboseLevel: entry?.verboseLevel,
reasoningLevel: entry?.reasoningLevel,
elevatedLevel: entry?.elevatedLevel,
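The flag rendering in `buildFlags` above is tri-state, which can be summarized as:

```typescript
// Tri-state flag rendering from buildFlags above: an unset fastMode adds
// no flag, true renders "fast", false renders "fast:off".
function fastFlag(fastMode?: boolean): string | null {
  if (typeof fastMode !== "boolean") {
    return null;
  }
  return fastMode ? "fast" : "fast:off";
}

console.log([fastFlag(true), fastFlag(false), fastFlag(undefined)]);
// [ 'fast', 'fast:off', null ]
```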

View File

@ -8,6 +8,7 @@ export type SessionStatus = {
updatedAt: number | null;
age: number | null;
thinkingLevel?: string;
fastMode?: boolean;
verboseLevel?: string;
reasoningLevel?: string;
elevatedLevel?: string;

View File

@ -100,6 +100,7 @@ export type SessionEntry = {
abortCutoffTimestamp?: number;
chatType?: SessionChatType;
thinkingLevel?: string;
fastMode?: boolean;
verboseLevel?: string;
reasoningLevel?: string;
elevatedLevel?: string;

View File

@ -52,6 +52,7 @@ export const SessionsPatchParamsSchema = Type.Object(
key: NonEmptyString,
label: Type.Optional(Type.Union([SessionLabelString, Type.Null()])),
thinkingLevel: Type.Optional(Type.Union([NonEmptyString, Type.Null()])),
fastMode: Type.Optional(Type.Union([Type.Boolean(), Type.Null()])),
verboseLevel: Type.Optional(Type.Union([NonEmptyString, Type.Null()])),
reasoningLevel: Type.Optional(Type.Union([NonEmptyString, Type.Null()])),
responseUsage: Type.Optional(

View File

@ -379,6 +379,7 @@ export const agentHandlers: GatewayRequestHandlers = {
sessionId,
updatedAt: now,
thinkingLevel: entry?.thinkingLevel,
fastMode: entry?.fastMode,
verboseLevel: entry?.verboseLevel,
reasoningLevel: entry?.reasoningLevel,
systemSent: entry?.systemSent,

View File

@ -980,6 +980,7 @@ export const chatHandlers: GatewayRequestHandlers = {
sessionId,
messages: bounded.messages,
thinkingLevel,
fastMode: entry?.fastMode,
verboseLevel,
});
},

View File

@ -166,6 +166,7 @@ async function touchSessionStore(params: {
sessionId: params.sessionId,
updatedAt: params.now,
thinkingLevel: params.entry?.thinkingLevel,
fastMode: params.entry?.fastMode,
verboseLevel: params.entry?.verboseLevel,
reasoningLevel: params.entry?.reasoningLevel,
systemSent: params.entry?.systemSent,

View File

@ -326,6 +326,7 @@ export async function performGatewaySessionReset(params: {
systemSent: false,
abortedLastRun: false,
thinkingLevel: currentEntry?.thinkingLevel,
fastMode: currentEntry?.fastMode,
verboseLevel: currentEntry?.verboseLevel,
reasoningLevel: currentEntry?.reasoningLevel,
responseUsage: currentEntry?.responseUsage,

View File

@ -929,6 +929,7 @@ export function listSessionsFromStore(params: {
systemSent: entry?.systemSent,
abortedLastRun: entry?.abortedLastRun,
thinkingLevel: entry?.thinkingLevel,
fastMode: entry?.fastMode,
verboseLevel: entry?.verboseLevel,
reasoningLevel: entry?.reasoningLevel,
elevatedLevel: entry?.elevatedLevel,

View File

@ -32,6 +32,7 @@ export type GatewaySessionRow = {
systemSent?: boolean;
abortedLastRun?: boolean;
thinkingLevel?: string;
fastMode?: boolean;
verboseLevel?: string;
reasoningLevel?: string;
elevatedLevel?: string;

View File

@ -149,6 +149,37 @@ describe("gateway sessions patch", () => {
expect(entry.reasoningLevel).toBeUndefined();
});
test("persists fastMode=false (does not clear)", async () => {
const entry = expectPatchOk(
await runPatch({
patch: { key: MAIN_SESSION_KEY, fastMode: false },
}),
);
expect(entry.fastMode).toBe(false);
});
test("persists fastMode=true", async () => {
const entry = expectPatchOk(
await runPatch({
patch: { key: MAIN_SESSION_KEY, fastMode: true },
}),
);
expect(entry.fastMode).toBe(true);
});
test("clears fastMode when patch sets null", async () => {
const store: Record<string, SessionEntry> = {
[MAIN_SESSION_KEY]: { fastMode: true } as SessionEntry,
};
const entry = expectPatchOk(
await runPatch({
store,
patch: { key: MAIN_SESSION_KEY, fastMode: null },
}),
);
expect(entry.fastMode).toBeUndefined();
});
test("persists elevatedLevel=off (does not clear)", async () => {
const entry = expectPatchOk(
await runPatch({

View File

@ -11,6 +11,7 @@ import {
formatThinkingLevels,
formatXHighModelHint,
normalizeElevatedLevel,
normalizeFastMode,
normalizeReasoningLevel,
normalizeThinkLevel,
normalizeUsageDisplay,
@ -252,6 +253,19 @@ export async function applySessionsPatchToStore(params: {
}
}
if ("fastMode" in patch) {
const raw = patch.fastMode;
if (raw === null) {
delete next.fastMode;
} else if (raw !== undefined) {
const normalized = normalizeFastMode(raw);
if (normalized === undefined) {
return invalid("invalid fastMode (use true or false)");
}
next.fastMode = normalized;
}
}
if ("verboseLevel" in patch) {
const raw = patch.verboseLevel;
const parsed = parseVerboseOverride(raw);
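The `fastMode` patch branch above has three behaviors the new gateway tests pin down: `null` clears the per-session override, booleans set it (`false` persists rather than clearing), and an absent field leaves the entry untouched. A simplified sketch, handling booleans only (the real handler also accepts string aliases via `normalizeFastMode`):

```typescript
// Simplified sketch of the fastMode patch semantics above. Booleans only;
// the real handler also normalizes string aliases via normalizeFastMode.
type SessionEntrySketch = { fastMode?: boolean };

function applyFastModePatch(
  entry: SessionEntrySketch,
  patch: { fastMode?: boolean | null },
): void {
  if (!("fastMode" in patch)) {
    return; // field absent: leave the session untouched
  }
  if (patch.fastMode === null) {
    delete entry.fastMode; // null clears the per-session override
  } else if (patch.fastMode !== undefined) {
    entry.fastMode = patch.fastMode; // false persists; it does not clear
  }
}

const demoEntry: SessionEntrySketch = { fastMode: true };
applyFastModePatch(demoEntry, { fastMode: null });
console.log("fastMode" in demoEntry); // false
```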

View File

@ -4,6 +4,7 @@ import { formatThinkingLevels, listThinkingLevelLabels } from "../auto-reply/thi
import type { OpenClawConfig } from "../config/types.js";
const VERBOSE_LEVELS = ["on", "off"];
const FAST_LEVELS = ["status", "on", "off"];
const REASONING_LEVELS = ["on", "off"];
const ELEVATED_LEVELS = ["on", "off", "ask", "full"];
const ACTIVATION_LEVELS = ["mention", "always"];
@ -52,6 +53,7 @@ export function parseCommand(input: string): ParsedCommand {
export function getSlashCommands(options: SlashCommandOptions = {}): SlashCommand[] {
const thinkLevels = listThinkingLevelLabels(options.provider, options.model);
const verboseCompletions = createLevelCompletion(VERBOSE_LEVELS);
const fastCompletions = createLevelCompletion(FAST_LEVELS);
const reasoningCompletions = createLevelCompletion(REASONING_LEVELS);
const usageCompletions = createLevelCompletion(USAGE_FOOTER_LEVELS);
const elevatedCompletions = createLevelCompletion(ELEVATED_LEVELS);
@ -76,6 +78,11 @@ export function getSlashCommands(options: SlashCommandOptions = {}): SlashComman
.filter((v) => v.startsWith(prefix.toLowerCase()))
.map((value) => ({ value, label: value })),
},
{
name: "fast",
description: "Set fast mode on/off",
getArgumentCompletions: fastCompletions,
},
{
name: "verbose",
description: "Set verbose on/off",
@ -142,6 +149,7 @@ export function helpText(options: SlashCommandOptions = {}): string {
"/session <key> (or /sessions)",
"/model <provider/model> (or /models)",
`/think <${thinkLevels}>`,
"/fast <status|on|off>",
"/verbose <on|off>",
"/reasoning <on|off>",
"/usage <off|tokens|full>",

View File

@ -79,6 +79,7 @@ export type GatewaySessionList = {
Pick<
SessionInfo,
| "thinkingLevel"
| "fastMode"
| "verboseLevel"
| "reasoningLevel"
| "model"
@ -92,6 +93,7 @@ export type GatewaySessionList = {
key: string;
sessionId?: string;
updatedAt?: number | null;
fastMode?: boolean;
sendPolicy?: string;
responseUsage?: ResponseUsageMode;
label?: string;

View File

@ -345,6 +345,27 @@ export function createCommandHandlers(context: CommandHandlerContext) {
chatLog.addSystem(`verbose failed: ${String(err)}`);
}
break;
case "fast":
if (!args || args === "status") {
chatLog.addSystem(`fast mode: ${state.sessionInfo.fastMode ? "on" : "off"}`);
break;
}
if (args !== "on" && args !== "off") {
chatLog.addSystem("usage: /fast <status|on|off>");
break;
}
try {
const result = await client.patchSession({
key: state.currentSessionKey,
fastMode: args === "on",
});
chatLog.addSystem(`fast mode ${args === "on" ? "enabled" : "disabled"}`);
applySessionInfoFromPatch(result);
await refreshSessionInfo();
} catch (err) {
chatLog.addSystem(`fast failed: ${String(err)}`);
}
break;
case "reasoning":
if (!args) {
chatLog.addSystem("usage: /reasoning <on|off>");

View File

@ -165,6 +165,9 @@ export function createSessionActions(context: SessionActionContext) {
if (entry?.thinkingLevel !== undefined) {
next.thinkingLevel = entry.thinkingLevel;
}
if (entry?.fastMode !== undefined) {
next.fastMode = entry.fastMode;
}
if (entry?.verboseLevel !== undefined) {
next.verboseLevel = entry.verboseLevel;
}
@ -286,10 +289,12 @@ export function createSessionActions(context: SessionActionContext) {
messages?: unknown[];
sessionId?: string;
thinkingLevel?: string;
fastMode?: boolean;
verboseLevel?: string;
};
state.currentSessionId = typeof record.sessionId === "string" ? record.sessionId : null;
state.sessionInfo.thinkingLevel = record.thinkingLevel ?? state.sessionInfo.thinkingLevel;
state.sessionInfo.fastMode = record.fastMode ?? state.sessionInfo.fastMode;
state.sessionInfo.verboseLevel = record.verboseLevel ?? state.sessionInfo.verboseLevel;
const showTools = (state.sessionInfo.verboseLevel ?? "off") !== "off";
chatLog.clearAll();

View File

@ -28,6 +28,7 @@ export type ResponseUsageMode = "on" | "off" | "tokens" | "full";
export type SessionInfo = {
thinkingLevel?: string;
fastMode?: boolean;
verboseLevel?: string;
reasoningLevel?: string;
model?: string;

View File

@ -752,6 +752,7 @@ export async function runTui(opts: TuiOptions) {
: "unknown";
const tokens = formatTokens(sessionInfo.totalTokens ?? null, sessionInfo.contextTokens ?? null);
const think = sessionInfo.thinkingLevel ?? "off";
const fast = sessionInfo.fastMode === true;
const verbose = sessionInfo.verboseLevel ?? "off";
const reasoning = sessionInfo.reasoningLevel ?? "off";
const reasoningLabel =
@ -761,6 +762,7 @@ export async function runTui(opts: TuiOptions) {
`session ${sessionLabel}`,
modelLabel,
think !== "off" ? `think ${think}` : null,
fast ? "fast" : null,
verbose !== "off" ? `verbose ${verbose}` : null,
reasoningLabel,
tokens,

View File

@ -378,4 +378,42 @@ describe("executeSlashCommand directives", () => {
expect(result.content).toBe("Current verbose level: full.\nOptions: on, full, off.");
expect(request).toHaveBeenNthCalledWith(1, "sessions.list", {});
});
it("reports the current fast mode for bare /fast", async () => {
const request = vi.fn(async (method: string, _payload?: unknown) => {
if (method === "sessions.list") {
return {
sessions: [row("agent:main:main", { fastMode: true })],
};
}
throw new Error(`unexpected method: ${method}`);
});
const result = await executeSlashCommand(
{ request } as unknown as GatewayBrowserClient,
"agent:main:main",
"fast",
"",
);
expect(result.content).toBe("Current fast mode: on.\nOptions: status, on, off.");
expect(request).toHaveBeenNthCalledWith(1, "sessions.list", {});
});
it("patches fast mode for /fast on", async () => {
const request = vi.fn().mockResolvedValue({ ok: true });
const result = await executeSlashCommand(
{ request } as unknown as GatewayBrowserClient,
"agent:main:main",
"fast",
"on",
);
expect(result.content).toBe("Fast mode enabled.");
expect(request).toHaveBeenCalledWith("sessions.patch", {
key: "agent:main:main",
fastMode: true,
});
});
});

View File

@ -63,6 +63,8 @@ export async function executeSlashCommand(
return await executeModel(client, sessionKey, args);
case "think":
return await executeThink(client, sessionKey, args);
case "fast":
return await executeFast(client, sessionKey, args);
case "verbose":
return await executeVerbose(client, sessionKey, args);
case "export":
@ -252,6 +254,44 @@ async function executeVerbose(
}
}
async function executeFast(
client: GatewayBrowserClient,
sessionKey: string,
args: string,
): Promise<SlashCommandResult> {
const rawMode = args.trim().toLowerCase();
if (!rawMode || rawMode === "status") {
try {
const session = await loadCurrentSession(client, sessionKey);
return {
content: formatDirectiveOptions(
`Current fast mode: ${resolveCurrentFastMode(session)}.`,
"status, on, off",
),
};
} catch (err) {
return { content: `Failed to get fast mode: ${String(err)}` };
}
}
if (rawMode !== "on" && rawMode !== "off") {
return {
      content: `Unrecognized fast mode "${args.trim()}". Valid options: status, on, off.`,

};
}
try {
await client.request("sessions.patch", { key: sessionKey, fastMode: rawMode === "on" });
return {
content: `Fast mode ${rawMode === "on" ? "enabled" : "disabled"}.`,
action: "refresh",
};
} catch (err) {
return { content: `Failed to set fast mode: ${String(err)}` };
}
}
async function executeUsage(
client: GatewayBrowserClient,
sessionKey: string,
@ -534,6 +574,10 @@ function resolveCurrentThinkingLevel(
});
}
function resolveCurrentFastMode(session: GatewaySessionRow | undefined): "on" | "off" {
return session?.fastMode === true ? "on" : "off";
}
function fmtTokens(n: number): string {
if (n >= 1_000_000) {
return `${(n / 1_000_000).toFixed(1).replace(/\.0$/, "")}M`;

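The tri-state handling in `executeFast` above condenses to the sketch below. This is a hedged model, not the shipped code: the gateway client is abstracted to a plain `patch` callback, and `SlashResult` stands in for the real `SlashCommandResult` type.

```typescript
// Simplified result shape; the real SlashCommandResult carries more fields.
type SlashResult = { content: string; action?: "refresh" };

function handleFastArg(
  args: string,
  currentFastMode: boolean | undefined,
  patch: (fastMode: boolean) => void,
): SlashResult {
  const mode = args.trim().toLowerCase();
  if (!mode || mode === "status") {
    // Bare "/fast" (or "/fast status") reports without mutating the session.
    const current = currentFastMode === true ? "on" : "off";
    return { content: `Current fast mode: ${current}.\nOptions: status, on, off.` };
  }
  if (mode !== "on" && mode !== "off") {
    return { content: `Unrecognized fast mode "${args.trim()}". Valid options: status, on, off.` };
  }
  patch(mode === "on");
  // "refresh" tells the caller to re-render session state after the patch.
  return { content: `Fast mode ${mode === "on" ? "enabled" : "disabled"}.`, action: "refresh" };
}
```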
View File

@ -23,4 +23,11 @@ describe("parseSlashCommand", () => {
args: "full",
});
});
it("parses fast commands", () => {
expect(parseSlashCommand("/fast:on")).toMatchObject({
command: { name: "fast" },
args: "on",
});
});
});

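The parser itself is not shown in this diff, but the `"/fast:on"` test above implies a split on the first `:` (or whitespace) after the slash. The sketch below is hypothetical, covering only the paths the test exercises; the real `parseSlashCommand` also validates against `SLASH_COMMANDS` and handles unknown commands.

```typescript
// Hypothetical parser sketch: "/fast:on" and "/verbose full" both split into
// a command name and a trimmed argument string.
function parseSlashCommandSketch(input: string): { name: string; args: string } | null {
  if (!input.startsWith("/")) return null;
  const body = input.slice(1);
  const sep = body.search(/[:\s]/); // first ":" or whitespace ends the name
  if (sep === -1) return { name: body, args: "" };
  return { name: body.slice(0, sep), args: body.slice(sep + 1).trim() };
}
```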
View File

@ -88,6 +88,15 @@ export const SLASH_COMMANDS: SlashCommandDef[] = [
executeLocal: true,
argOptions: ["on", "off", "full"],
},
{
name: "fast",
description: "Toggle fast mode",
args: "<status|on|off>",
icon: "zap",
category: "model",
executeLocal: true,
argOptions: ["status", "on", "off"],
},
// ── Tools ──
{

View File

@ -63,6 +63,7 @@ export async function patchSession(
patch: {
label?: string | null;
thinkingLevel?: string | null;
fastMode?: boolean | null;
verboseLevel?: string | null;
reasoningLevel?: string | null;
},
@ -77,6 +78,9 @@ export async function patchSession(
if ("thinkingLevel" in patch) {
params.thinkingLevel = patch.thinkingLevel;
}
if ("fastMode" in patch) {
params.fastMode = patch.fastMode;
}
if ("verboseLevel" in patch) {
params.verboseLevel = patch.verboseLevel;
}

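The `"fastMode" in patch` guard above matters because the patch is tri-state: an explicit `null` clears a session override, while an absent key leaves the current value untouched. A minimal model of that param-building step (names mirror the diff; the gateway transport is assumed away):

```typescript
// Only the fields relevant to this diff; the real patch type has more keys.
type SessionPatch = { thinkingLevel?: string | null; fastMode?: boolean | null };

// Forward a key only when the caller supplied it, preserving the
// "absent ≠ null" distinction that `in` checks provide.
function buildPatchParams(key: string, patch: SessionPatch): Record<string, unknown> {
  const params: Record<string, unknown> = { key };
  if ("thinkingLevel" in patch) params.thinkingLevel = patch.thinkingLevel;
  if ("fastMode" in patch) params.fastMode = patch.fastMode;
  return params;
}
```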
View File

@ -379,6 +379,7 @@ export type GatewaySessionRow = {
systemSent?: boolean;
abortedLastRun?: boolean;
thinkingLevel?: string;
fastMode?: boolean;
verboseLevel?: string;
reasoningLevel?: string;
elevatedLevel?: string;
@ -396,6 +397,7 @@ export type SessionsPatchResult = SessionsPatchResultBase<{
sessionId: string;
updatedAt?: number;
thinkingLevel?: string;
fastMode?: boolean;
verboseLevel?: string;
reasoningLevel?: string;
elevatedLevel?: string;

View File

@ -60,7 +60,7 @@ describe("sessions view", () => {
await Promise.resolve();
const selects = container.querySelectorAll("select");
-      const verbose = selects[1] as HTMLSelectElement | undefined;
+      const verbose = selects[2] as HTMLSelectElement | undefined;
expect(verbose?.value).toBe("full");
expect(Array.from(verbose?.options ?? []).some((option) => option.value === "full")).toBe(true);
});
@ -83,10 +83,32 @@ describe("sessions view", () => {
await Promise.resolve();
const selects = container.querySelectorAll("select");
-      const reasoning = selects[2] as HTMLSelectElement | undefined;
+      const reasoning = selects[3] as HTMLSelectElement | undefined;
expect(reasoning?.value).toBe("custom-mode");
expect(
Array.from(reasoning?.options ?? []).some((option) => option.value === "custom-mode"),
).toBe(true);
});
it("renders explicit fast mode without falling back to inherit", async () => {
const container = document.createElement("div");
render(
renderSessions(
buildProps(
buildResult({
key: "agent:main:main",
kind: "direct",
updatedAt: Date.now(),
fastMode: true,
}),
),
),
container,
);
await Promise.resolve();
const selects = container.querySelectorAll("select");
const fast = selects[1] as HTMLSelectElement | undefined;
expect(fast?.value).toBe("on");
});
});

View File

@ -37,6 +37,7 @@ export type SessionsProps = {
patch: {
label?: string | null;
thinkingLevel?: string | null;
fastMode?: boolean | null;
verboseLevel?: string | null;
reasoningLevel?: string | null;
},
@ -52,6 +53,11 @@ const VERBOSE_LEVELS = [
{ value: "on", label: "on" },
{ value: "full", label: "full" },
] as const;
const FAST_LEVELS = [
{ value: "", label: "inherit" },
{ value: "on", label: "on" },
{ value: "off", label: "off" },
] as const;
const REASONING_LEVELS = ["", "off", "on", "stream"] as const;
const PAGE_SIZES = [10, 25, 50, 100] as const;
@ -306,6 +312,7 @@ export function renderSessions(props: SessionsProps) {
${sortHeader("updated", "Updated")}
${sortHeader("tokens", "Tokens")}
<th>Thinking</th>
<th>Fast</th>
<th>Verbose</th>
<th>Reasoning</th>
<th style="width: 60px;"></th>
@ -316,7 +323,7 @@ export function renderSessions(props: SessionsProps) {
paginated.length === 0
? html`
<tr>
<td colspan="9" style="text-align: center; padding: 48px 16px; color: var(--muted)">
<td colspan="10" style="text-align: center; padding: 48px 16px; color: var(--muted)">
No sessions found.
</td>
</tr>
@ -390,6 +397,8 @@ function renderRow(
const isBinaryThinking = isBinaryThinkingProvider(row.modelProvider);
const thinking = resolveThinkLevelDisplay(rawThinking, isBinaryThinking);
const thinkLevels = withCurrentOption(resolveThinkLevelOptions(row.modelProvider), thinking);
const fastMode = row.fastMode === true ? "on" : row.fastMode === false ? "off" : "";
const fastLevels = withCurrentLabeledOption(FAST_LEVELS, fastMode);
const verbose = row.verboseLevel ?? "";
const verboseLevels = withCurrentLabeledOption(VERBOSE_LEVELS, verbose);
const reasoning = row.reasoningLevel ?? "";
@ -465,6 +474,23 @@ function renderRow(
)}
</select>
</td>
<td>
<select
?disabled=${disabled}
style="padding: 6px 10px; font-size: 13px; border: 1px solid var(--border); border-radius: var(--radius-sm); min-width: 90px;"
@change=${(e: Event) => {
const value = (e.target as HTMLSelectElement).value;
onPatch(row.key, { fastMode: value === "" ? null : value === "on" });
}}
>
${fastLevels.map(
(level) =>
html`<option value=${level.value} ?selected=${fastMode === level.value}>
${level.label}
</option>`,
)}
</select>
</td>
<td>
<select
?disabled=${disabled}
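The select wiring above round-trips the boolean through three display states: `true` → "on", `false` → "off", and `undefined` → "" (the "inherit" option), with the chosen option mapping back to `true`/`false`/`null` in the patch. Extracted as two small helpers (hypothetical names; the diff inlines both expressions):

```typescript
// Row value → <option> value, matching
// `row.fastMode === true ? "on" : row.fastMode === false ? "off" : ""`.
function fastModeToOption(fastMode: boolean | undefined): "on" | "off" | "" {
  return fastMode === true ? "on" : fastMode === false ? "off" : "";
}

// Selected <option> value → patch payload, matching
// `value === "" ? null : value === "on"` (null clears the override).
function optionToPatch(value: string): boolean | null {
  return value === "" ? null : value === "on";
}
```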