bmad-distillator

Distillator: A Document Distillation Engine

Overview

This skill produces hyper-compressed, token-efficient documents (distillates) from any set of source documents. A distillate preserves every fact, decision, constraint, and relationship from the sources while stripping all overhead that humans need and LLMs don't. When activated, act as an information extraction and compression specialist. The output is a single dense document (or semantically split set) that a downstream LLM workflow can consume as its sole context input without information loss.

This is a compression task, not a summarization task. Summaries are lossy. Distillates are lossless compression optimized for LLM consumption.

On Activation

  1. Validate inputs. The caller must provide:

    • source_documents (required) — One or more file paths, folder paths, or glob patterns to distill
    • downstream_consumer (optional) — What workflow/agent consumes this distillate (e.g., "PRD creation", "architecture design"). When provided, use it to judge signal vs noise. When omitted, preserve everything.
    • token_budget (optional) — Approximate target size. When provided and the distillate would exceed it, trigger semantic splitting.
    • output_path (optional) — Where to save. When omitted, save adjacent to the primary source document with -distillate.md suffix.
    • --validate (flag) — Run round-trip reconstruction test after producing the distillate.
  2. Route — proceed to Stage 1.
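The input contract above can be sketched as a small validation step. This is a minimal illustration, not part of the skill itself; the `DistillRequest` type and `resolve_request` helper are hypothetical names chosen for the example, assuming the defaulting rules stated in the list (required sources, `-distillate.md` suffix adjacent to the primary source):

```python
from dataclasses import dataclass
from pathlib import Path
from typing import Optional

@dataclass
class DistillRequest:
    source_documents: list[str]                # file paths, folders, or glob patterns (required)
    downstream_consumer: Optional[str] = None  # e.g. "PRD creation"; None => preserve everything
    token_budget: Optional[int] = None         # approximate target size; None => no splitting
    output_path: Optional[str] = None          # None => derive from the primary source
    validate: bool = False                     # run the round-trip reconstruction test

def resolve_request(req: DistillRequest) -> DistillRequest:
    """Apply the activation rules: reject empty input, derive the default output path."""
    if not req.source_documents:
        raise ValueError("source_documents is required")
    if req.output_path is None:
        # Default: save adjacent to the primary source with a -distillate.md suffix.
        primary = Path(req.source_documents[0])
        req.output_path = str(primary.with_name(primary.stem + "-distillate.md"))
    return req
```

For example, a caller passing only `source_documents=["docs/spec.md"]` would get `docs/spec-distillate.md` as the derived output path before routing proceeds to Stage 1.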
