LLM Streaming

Deliver LLM responses in real time for a more responsive UX.

Basic Streaming (OpenAI)

from openai import AsyncOpenAI

client = AsyncOpenAI()

async def stream_response(prompt: str):
    """Yield tokens as they're generated."""
    stream = await client.chat.completions.create(
        model="gpt-5.2",
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    # Each chunk carries an incremental delta; skip empty keep-alive chunks.
    async for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            yield delta
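On the consuming side, a streaming UI renders each delta as it arrives and assembles them into the full response. A minimal, network-free sketch of that loop, using a plain generator as a stand-in for the API stream (`fake_stream` and `consume` are illustrative names, not part of the OpenAI SDK):

```python
def fake_stream():
    # Stand-in for the API stream: yields incremental text deltas.
    for token in ["Hel", "lo, ", "world", "!"]:
        yield token

def consume(stream) -> str:
    parts = []
    for delta in stream:
        # Render each token immediately for real-time feedback.
        print(delta, end="", flush=True)
        parts.append(delta)
    print()
    return "".join(parts)

consume(fake_stream())  # prints "Hello, world!" incrementally
```

The same accumulate-and-render pattern applies unchanged when the deltas come from the real `stream_response` generator instead of a stub.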
More from yonatangross/orchestkit

Installs: 12 · GitHub Stars: 170 · First Seen: Jan 22, 2026