mirror of https://github.com/openclaw/openclaw.git
Adds support for Slack's Agents & AI Apps text streaming APIs (`chat.startStream`, `chat.appendStream`, `chat.stopStream`) to deliver LLM responses as a single updating message instead of separate messages per block.

Changes:

- New `src/slack/streaming.ts` with stream lifecycle helpers using the SDK's `ChatStreamer` (`client.chatStream()`)
- New `streaming` config option on `SlackAccountConfig`
- Updated `dispatch.ts` to route block replies through the stream when enabled, with graceful fallback to normal delivery
- Docs in `docs/channels/slack.md` covering setup and requirements

The streaming integration works by intercepting the deliver callback in the reply dispatcher. When streaming is enabled and a thread context exists, the first text delivery starts a stream, subsequent deliveries append to it, and the stream is finalized after dispatch completes. Media payloads and error cases fall back to normal message delivery.

Refs:

- https://docs.slack.dev/ai/developing-ai-apps#streaming
- https://docs.slack.dev/reference/methods/chat.startStream
- https://docs.slack.dev/reference/methods/chat.appendStream
- https://docs.slack.dev/reference/methods/chat.stopStream
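The lifecycle described above (start a stream on the first text delivery, append on subsequent deliveries, finalize after dispatch) can be sketched roughly as follows. This is an illustrative TypeScript sketch against a minimal client interface, not the actual `streaming.ts`: the `StreamSession` name, the `SlackApi` interface, and the raw `apiCall` wiring are assumptions (the real change uses the SDK's `ChatStreamer` via `client.chatStream()`), and the argument names follow the `chat.*Stream` method docs linked above.

```typescript
// Minimal client surface assumed for illustration; @slack/web-api's
// WebClient exposes a compatible generic apiCall(method, args) method.
type SlackApi = {
  apiCall(method: string, args: Record<string, unknown>): Promise<{ ts?: string }>;
};

// Hypothetical helper tracking one streamed message in a thread.
class StreamSession {
  private ts: string | undefined; // set once the stream has been started

  constructor(
    private client: SlackApi,
    private channel: string,
    private threadTs: string,
  ) {}

  // First text delivery starts the stream; later deliveries append to it.
  async deliver(text: string): Promise<void> {
    if (!this.ts) {
      const res = await this.client.apiCall("chat.startStream", {
        channel: this.channel,
        thread_ts: this.threadTs, // streaming requires a thread context
      });
      this.ts = res.ts;
    }
    await this.client.apiCall("chat.appendStream", {
      channel: this.channel,
      ts: this.ts,
      markdown_text: text, // assumed parameter name per the method docs
    });
  }

  // Finalize the stream after dispatch completes; no-op if never started.
  async stop(): Promise<void> {
    if (this.ts) {
      await this.client.apiCall("chat.stopStream", {
        channel: this.channel,
        ts: this.ts,
      });
    }
  }
}
```

In the dispatcher, the intercepted deliver callback would hand text chunks to `deliver()` and call `stop()` once dispatch finishes; a media payload or a thrown API error would skip the session and fall through to normal message delivery.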
Files in `docs/channels`:

- bluebubbles.md
- discord.md
- feishu.md
- googlechat.md
- grammy.md
- imessage.md
- index.md
- line.md
- location.md
- matrix.md
- mattermost.md
- msteams.md
- nextcloud-talk.md
- nostr.md
- signal.md
- slack.md
- telegram.md
- tlon.md
- troubleshooting.md
- twitch.md
- whatsapp.md
- zalo.md
- zalouser.md