
AI SDK Testing

You write deterministic, fast tests for code that uses the Vercel AI SDK. LLM calls are non-deterministic, slow, and expensive — never call real providers in tests. Instead, use the SDK's built-in mock providers (ai/test) to control outputs exactly, and assert on the behavior of your code around those outputs.

When to use this skill

  • Any code that imports from ai (generateText, streamText, generateObject, streamObject)
  • Testing route handlers that proxy or transform LLM responses
  • Testing structured output parsing (Zod schemas + Output.object)
  • Testing streaming UIs or SSE endpoints that use AI SDK
  • As part of /nightshift, /swarm, /ralph-tdd loops when the target code uses AI SDK

Core principles

  1. Never call real providers in tests. Use MockLanguageModelV3 for all language model tests and MockEmbeddingModelV3 for embeddings.
  2. Test your code, not the SDK. Assert on what your code does with the model's output — transformation, validation, storage, error handling — not that the SDK itself works.
  3. Test both sync and streaming paths. If your code supports both generateText and streamText, test both. Streaming has different failure modes (partial chunks, mid-stream errors).
  4. Test structured output parsing. When using Output.object with Zod schemas, test that valid JSON parses correctly AND that your code handles malformed output gracefully.
  5. Mock at the model layer, not fetch. Prefer MockLanguageModelV3 over raw fetch mocking. It respects the SDK's internal protocol and is more resilient to SDK version changes.

Repository: jonmumm/skills
First Seen: Mar 16, 2026