mirror of https://github.com/openclaw/openclaw.git
Remove the LLM-based standingInstructions and availableSkills extraction pipeline. Instead, cache the main agent's full system prompt on the first llm_input and pass it as-is to the guardian as "Agent context". This eliminates two async LLM calls per session, simplifies the codebase (~340 lines removed), and gives the guardian MORE context (the complete system prompt including tool definitions, memory, and skills) rather than a lossy LLM-extracted summary.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
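The commit message describes caching the system prompt on first use and forwarding it verbatim instead of summarizing it. A minimal TypeScript sketch of that pattern, assuming hypothetical names (`SessionState`, `onLlmInput`, `buildGuardianPrompt` are illustrative, not the actual openclaw API):

```typescript
// Hypothetical sketch of the approach described above; names are
// illustrative and do not reflect the real openclaw codebase.

interface SessionState {
  // Full system prompt captured from the first llm_input event.
  cachedSystemPrompt?: string;
}

function onLlmInput(session: SessionState, systemPrompt: string): void {
  // Cache the complete system prompt the first time we see llm_input;
  // subsequent calls leave the cached copy untouched.
  if (session.cachedSystemPrompt === undefined) {
    session.cachedSystemPrompt = systemPrompt;
  }
}

function buildGuardianPrompt(session: SessionState): string {
  // Pass the cached prompt through as-is as "Agent context", with no
  // intermediate extraction LLM calls.
  const context = session.cachedSystemPrompt ?? "";
  return `Agent context:\n${context}`;
}
```

Because the guardian receives the unmodified prompt, it sees tool definitions, memory, and skills exactly as the main agent does, at the cost of a longer context rather than a summarized one.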
- ..
- directory.ts
- plugin-api.ts
- plugin-runtime-mock.ts
- runtime-env.ts
- send-config.ts
- start-account-context.ts
- start-account-lifecycle.ts
- status-issues.ts