effect-uai-streaming-tool-output
Tool.streaming lets a tool's run emit a Stream<Event>, paired with a
finalize that reduces those events into the model-facing output. The
intermediate events flow to the user as ToolEvent.Intermediates in
real time; the model only ever sees finalize(events).
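The split is easy to see in a dependency-free TypeScript sketch. This is not the effect-uai API (names like StreamingTool, runStreamingTool, and the event shapes are illustrative); it only shows the idea: run yields events as they happen, every event is forwarded to the user, and only the reduced finalize result reaches the model.

```typescript
// Library-agnostic sketch of the streaming-tool idea: `run` yields
// intermediate events, `finalize` reduces them to the model-facing output.
// All names here are illustrative, NOT the effect-uai API.
type ProgressEvent =
  | { kind: "progress"; done: number }
  | { kind: "chunk"; text: string }

interface StreamingTool<E, Out> {
  run: () => AsyncGenerator<E>        // events flow to the user in real time
  finalize: (events: E[]) => Out      // the model only ever sees this result
}

// A toy "download" tool: emits text chunks plus progress ticks.
const downloadTool: StreamingTool<ProgressEvent, { bytes: number; preview: string }> = {
  async *run() {
    for (const text of ["hel", "lo"]) {
      yield { kind: "chunk", text }
      yield { kind: "progress", done: text.length }
    }
  },
  finalize: (events) => ({
    bytes: events
      .filter((e): e is Extract<ProgressEvent, { kind: "progress" }> => e.kind === "progress")
      .reduce((n, e) => n + e.done, 0),
    preview: events
      .filter((e): e is Extract<ProgressEvent, { kind: "chunk" }> => e.kind === "chunk")
      .map((e) => e.text)
      .join(""),
  }),
}

// Drive the tool: forward each event to the UI as it arrives,
// then reduce the collected events into the terminal result.
async function runStreamingTool<E, Out>(
  tool: StreamingTool<E, Out>,
  onEvent: (e: E) => void,   // e.g. push an intermediate event to the UI
): Promise<Out> {
  const events: E[] = []
  for await (const event of tool.run()) {
    events.push(event)
    onEvent(event)
  }
  return tool.finalize(events)
}
```

Running downloadTool through runStreamingTool forwards four events to the UI callback and resolves to the reduced result ({ bytes: 5, preview: "hello" }) for the model.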
Reach for this when the user says any of:
- "Show progress while a tool runs"
- "Stream sub-agent reasoning to the UI but give the parent model the final answer"
- "I want a download / search tool with live progress and a clean structured result"
Pattern 1: progress + terminal result
import { Duration, Effect, Schema, Stream } from "effect"
import * as Tool from "@effect-uai/core/Tool"
More from betalyra/effect-uai
effect-uai
Use when building AI agents with effect-uai (Effect-based primitives for agent loops, items/turns, tools, streaming, structured output, multi-provider). Covers the design philosophy, the core primitives, provider wiring (OpenAI Responses, Anthropic, Google Gemini), and a catalog of recipe patterns (retry, multi-model fallback, tool approval, auto-compaction, streaming SSE/JSONL, etc.) the user can compose into their own agent.
effect-uai-agentic-loop
Use when the user wants a long-lived chat agent that pulls user messages from a queue, debounces typing bursts into one batch, and only checks for new input between cleanly-finished turns. Mid-task tool exchanges run uninterrupted; new messages are buffered until the next clean turn boundary.
effect-uai-basic-usage
Use when the user wants the canonical effect-uai agent loop — stream a model turn, run any tools the model asks for, append outputs, continue until the model produces a final answer. The starting shape every other recipe is a variation of.
effect-uai-model-retry
Use when the user wants to retry transient model failures (rate limits, transport hiccups, timeouts) with exponential backoff, while letting non-retryable failures (content filtered, auth, invalid request, context length) propagate immediately. Inline pipeline with no helper service.
effect-uai-model-council
Use when the user wants three (or more) models to answer the same question, score each other's answers (no self-judging), and emit a winner — e.g. consensus voting, audit, automated quality picking. Pure Stream composition; no Queue, no Deferred, no manual forks.
effect-uai-pause-resume
Use when the user wants to soft-pause the agent loop between turns (no provider call held open) and resume later — e.g. cooldown after rate-limit, manual UI pause button, scheduled gating. State threads through naturally; resume picks up exactly where pause left off, no checkpointing needed.