- Remove unused PluginRuntime import, consolidate import lines
- Bump @mariozechner/pi-ai from 0.55.3 to 0.58.0 to match root
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Remove the LLM-based standingInstructions and availableSkills extraction
pipeline. Instead, cache the main agent's full system prompt on the first
llm_input and pass it as-is to the guardian as "Agent context".
This eliminates two async LLM calls per session, simplifies the codebase
(~340 lines removed), and gives the guardian MORE context (the complete
system prompt including tool definitions, memory, and skills) rather than
a lossy LLM-extracted summary.
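The caching approach can be sketched as follows. This is an illustrative sketch only: the hook shape (`LlmInput`), the class name, and `buildGuardianPrompt` are assumptions, not the actual plugin API.

```typescript
type LlmInput = { systemPrompt: string; messages: unknown[] };

class GuardianContextCache {
  private cachedSystemPrompt: string | null = null;

  // Called on every llm_input event; only the first event populates the cache.
  onLlmInput(input: LlmInput): void {
    if (this.cachedSystemPrompt === null) {
      this.cachedSystemPrompt = input.systemPrompt;
    }
  }

  // The guardian receives the cached prompt verbatim as "Agent context",
  // with no intermediate LLM extraction step.
  buildGuardianPrompt(toolCallSummary: string): string {
    const context = this.cachedSystemPrompt ?? "(no agent context captured yet)";
    return ["Agent context:", context, "", "Proposed tool call:", toolCallSummary].join("\n");
  }
}
```

Because the cache is populated as a side effect of the first `llm_input`, no extra LLM round trips are needed per session.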
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Guardian intercepts tool calls via a before_tool_call hook and sends them
to a separate LLM for review, blocking actions the user never requested
and defending against prompt injection attacks.
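A minimal sketch of the interception flow, assuming hypothetical names (`ToolCall`, `Verdict`, `ReviewFn`) rather than the real hook signature:

```typescript
type ToolCall = { name: string; args: Record<string, unknown> };
type Verdict = { allowed: boolean; reason: string };
type ReviewFn = (call: ToolCall, conversation: string) => Promise<Verdict>;

// before_tool_call hook: forwards the pending call to the guardian model
// and aborts execution when the verdict is "not allowed".
async function beforeToolCall(
  call: ToolCall,
  conversation: string,
  review: ReviewFn,
): Promise<ToolCall> {
  const verdict = await review(call, conversation);
  if (!verdict.allowed) {
    throw new Error(`Guardian blocked ${call.name}: ${verdict.reason}`);
  }
  return call; // passed through unchanged when approved
}
```

In the real plugin the `review` function wraps a call to a separate guardian LLM; here it is left abstract so the blocking logic is visible.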
Key design decisions:
- Conversation turns (user + assistant pairs) give the guardian context to
  interpret confirmations like "yes" / "go ahead"
- Assistant replies are explicitly marked as untrusted in the prompt to
prevent poisoning attacks from propagating
- Provider resolution uses SDK (not hardcoded list) with 3-layer
fallback: explicit config → models.json → pi-ai built-in database
- Lazy resolution pattern for async provider/auth lookup in sync register()
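The last two decisions can be sketched together. Names here (`ProviderInfo`, `makeLazy`, `resolveProvider`) are hypothetical; the real code resolves providers through the pi-ai SDK rather than plain lookup tables.

```typescript
type ProviderInfo = { baseUrl: string; apiKey: string };

// Lazy resolution: register() must be synchronous, so it stores a memoized
// thunk instead of awaiting; the async lookup runs once, on first use.
function makeLazy(resolve: () => Promise<ProviderInfo>): () => Promise<ProviderInfo> {
  let cached: Promise<ProviderInfo> | null = null;
  return () => (cached ??= resolve());
}

// 3-layer fallback: explicit config → models.json → built-in database.
async function resolveProvider(
  explicit: ProviderInfo | undefined,
  modelsJson: Record<string, ProviderInfo>,
  builtIn: Record<string, ProviderInfo>,
  model: string,
): Promise<ProviderInfo> {
  if (explicit) return explicit;
  if (modelsJson[model]) return modelsJson[model];
  const fallback = builtIn[model];
  if (!fallback) throw new Error(`No provider found for ${model}`);
  return fallback;
}
```

Memoizing the promise (not the value) means concurrent first calls share one in-flight lookup instead of racing.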
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>