effect-uai-agentic-loop
Long-lived chat agent. Between cleanly finished turns it drains a user-message queue; bursts of typing get coalesced into one batch via a debounce window; tool-call turns run straight through to the next iteration without checking for new user input.
Reach for this when the user says any of:
- "Long-lived chat agent with a queue / WebSocket"
- "Coalesce burst messages into one user turn"
- "Interactive CLI agent that keeps running"
The design move
The whole loop turns on one question at the top of each iteration:
Does the model need fresh user input, or does it still owe us a response?
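A minimal sketch of that branch plus the debounce coalescing, in library-agnostic TypeScript — `UserQueue`, `nextUserBatch`, and `runTurn` are illustrative names, not effect-uai API:

```typescript
// Hypothetical turn shape; the real effect-uai types will differ.
type Turn = { endedWith: "final" | "toolCalls" };

interface UserQueue {
  take(): Promise<string>; // waits until at least one message arrives
  drain(): string[];       // takes everything currently queued, may be empty
}

// Coalesce a burst of user messages into one batch: after the first
// message arrives, keep draining until a full debounce window passes
// with no new input.
async function nextUserBatch(q: UserQueue, debounceMs: number): Promise<string[]> {
  const batch = [await q.take()];
  while (true) {
    await new Promise((r) => setTimeout(r, debounceMs));
    const more = q.drain();
    if (more.length === 0) return batch;
    batch.push(...more);
  }
}

// Top of each iteration: a tool-call turn still owes the model a
// response, so it loops straight through; only a cleanly finished
// turn waits for fresh user input.
async function agentLoop(
  runTurn: (userBatch: string[] | null) => Promise<Turn>,
  q: UserQueue,
): Promise<never> {
  let last: Turn = { endedWith: "final" };
  while (true) {
    const batch =
      last.endedWith === "toolCalls" ? null : await nextUserBatch(q, 50);
    last = await runTurn(batch);
  }
}
```

The debounce window trades a little latency for fewer model calls: two messages typed 10 ms apart become one user turn instead of two.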
More from betalyra/effect-uai
effect-uai
Use when building AI agents with effect-uai (Effect-based primitives for agent loops, items/turns, tools, streaming, structured output, multi-provider). Covers the design philosophy, the core primitives, provider wiring (OpenAI Responses, Anthropic, Google Gemini), and a catalog of recipe patterns (retry, multi-model fallback, tool approval, auto-compaction, streaming SSE/JSONL, etc.) the user can compose into their own agent.
effect-uai-basic-usage
Use when the user wants the canonical effect-uai agent loop — stream a model turn, run any tools the model asks for, append outputs, continue until the model produces a final answer. The starting shape every other recipe is a variation of.
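That canonical shape can be sketched in plain TypeScript — `Item`, `TurnResult`, and the `streamTurn` signature here are stand-ins for effect-uai's real primitives, not its actual API:

```typescript
// Hypothetical item/turn shapes standing in for effect-uai's primitives.
type Item =
  | { kind: "user"; text: string }
  | { kind: "assistant"; text: string }
  | { kind: "toolCall"; name: string; args: string }
  | { kind: "toolOutput"; name: string; output: string };

type TurnResult = {
  items: Item[];
  toolCalls: { name: string; args: string }[];
};

// Stream a turn, run any tools the model asked for, append their
// outputs to history, and continue until the model stops calling tools.
async function runAgent(
  streamTurn: (history: Item[]) => Promise<TurnResult>,
  tools: Record<string, (args: string) => Promise<string>>,
  history: Item[],
): Promise<Item[]> {
  while (true) {
    const turn = await streamTurn(history);
    history = [...history, ...turn.items];
    if (turn.toolCalls.length === 0) return history; // final answer reached
    for (const call of turn.toolCalls) {
      const output = await tools[call.name](call.args);
      history = [...history, { kind: "toolOutput", name: call.name, output }];
    }
  }
}
```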
effect-uai-model-retry
Use when the user wants to retry transient model failures (rate limits, transport hiccups, timeouts) with exponential backoff, while letting non-retryable failures (content filtered, auth, invalid request, context length) propagate immediately. Inline pipeline with no helper service.
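The split between retryable and non-retryable failures is the whole trick. A library-agnostic sketch (the `code` strings are hypothetical; effect-uai's real error taxonomy will differ):

```typescript
// Hypothetical error shape; real provider errors carry richer data.
type ModelError = { code: string };

// Transient: worth retrying with backoff.
const RETRYABLE = new Set(["rate_limited", "transport", "timeout"]);

function isRetryable(e: ModelError): boolean {
  return RETRYABLE.has(e.code);
}

// Retry transient failures with exponential backoff; everything else
// (content filtered, auth, invalid request, context length) propagates
// immediately on the first attempt.
async function withModelRetry<A>(
  call: () => Promise<A>,
  { tries = 4, baseMs = 200 } = {},
): Promise<A> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await call();
    } catch (e) {
      if (!isRetryable(e as ModelError) || attempt + 1 >= tries) throw e;
      await new Promise((r) => setTimeout(r, baseMs * 2 ** attempt));
    }
  }
}
```

In Effect proper this would be a `Schedule` piped onto the call; the inline loop above just makes the classification visible.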
effect-uai-model-council
Use when the user wants three (or more) models to answer the same question, score each other's answers (no self-judging), and emit a winner — e.g. consensus voting, audit, automated quality picking. Pure Stream composition; no Queue, no Deferred, no manual forks.
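The no-self-judging scoring round is easy to get subtly wrong, so here is a plain-TypeScript sketch of just that logic — `CouncilModel` and its methods are illustrative, not effect-uai's interface, and the real recipe composes this as Streams:

```typescript
// Hypothetical model interface: each member answers the question and
// can score a peer's answer (higher is better).
interface CouncilModel {
  name: string;
  answer(question: string): Promise<string>;
  score(question: string, answer: string): Promise<number>;
}

async function council(models: CouncilModel[], question: string) {
  // Everyone answers the same question.
  const answers = await Promise.all(
    models.map(async (m) => ({ model: m.name, text: await m.answer(question) })),
  );
  // Every model scores every answer except its own (no self-judging).
  const totals = new Map(answers.map((a) => [a.model, 0] as [string, number]));
  for (const judge of models) {
    for (const a of answers) {
      if (a.model === judge.name) continue;
      totals.set(a.model, totals.get(a.model)! + (await judge.score(question, a.text)));
    }
  }
  // Highest total score wins.
  const winner = answers.reduce((best, a) =>
    totals.get(a.model)! > totals.get(best.model)! ? a : best,
  );
  return { winner, totals };
}
```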
effect-uai-pause-resume
Use when the user wants to soft-pause the agent loop between turns (no provider call held open) and resume later — e.g. cooldown after rate-limit, manual UI pause button, scheduled gating. State threads through naturally; resume picks up exactly where pause left off, no checkpointing needed.
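The "soft" part means the gate is only checked between turns, so no provider request is ever held open while paused. A minimal gate sketch (names illustrative; the Effect version would use a `Deferred` or `Ref` instead of callbacks):

```typescript
// A pause gate the agent loop awaits between turns:
//   await gate.whenRunning();
//   state = await runTurn(state);
// Pausing never interrupts an in-flight turn; it only stops the next
// one from starting, and resume picks up with the same state.
class PauseGate {
  private paused = false;
  private waiters: (() => void)[] = [];

  pause(): void {
    this.paused = true;
  }

  resume(): void {
    this.paused = false;
    for (const wake of this.waiters.splice(0)) wake();
  }

  // Resolves immediately when running; otherwise waits for resume().
  async whenRunning(): Promise<void> {
    if (!this.paused) return;
    await new Promise<void>((resolve) => this.waiters.push(resolve));
  }
}
```

Because the loop's state threads through `runTurn` anyway, nothing needs checkpointing: pausing just delays the next iteration.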
effect-uai-auto-compaction
Use when the user is worried about long conversations exceeding the context window or the input-token budget — summarize earlier history into one item once a turn / token threshold is crossed, keep the last few items verbatim, then continue. The compaction step is just another streamTurn; history is just state.
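Since history is just state, compaction is a pure transform applied between turns. A sketch with an item-count threshold (`summarize` stands in for the extra streamTurn the real recipe would run; all names here are illustrative):

```typescript
type HistItem = { role: string; text: string };

// When history exceeds `maxItems`, summarize everything except the
// last `keep` items into one synthetic item and keep those last few
// verbatim. Below the threshold, history passes through untouched.
async function maybeCompact(
  history: HistItem[],
  summarize: (items: HistItem[]) => Promise<string>, // in effect-uai: another model turn
  { maxItems = 20, keep = 4 } = {},
): Promise<HistItem[]> {
  if (history.length <= maxItems) return history;
  const head = history.slice(0, history.length - keep);
  const tail = history.slice(history.length - keep);
  const summary = await summarize(head);
  return [
    { role: "system", text: `Summary of earlier conversation: ${summary}` },
    ...tail,
  ];
}
```

A token-budget threshold works the same way: swap the `history.length` check for a token count over the serialized items.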