eval-suite


/dm:eval-suite

Purpose

Batch evaluation across multiple content pieces to produce a portfolio-level quality assessment. Evaluate an entire content library, all assets in a campaign, or a set of deliverables in one run. Instead of evaluating content one piece at a time, this command processes everything together and delivers a holistic view of content quality.

The output includes content rankings, per-dimension analysis, overall quality distribution, common issues across the set, and a prioritized revision list. This is the command to use before a campaign launch (to catch weak assets before they go live), during a content audit (to assess library health), or after a production sprint (to quality-check all deliverables at once). Every evaluation is logged to the quality tracker for longitudinal trend analysis.
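Conceptually, the portfolio summary rolls per-item, per-dimension scores up into rankings, dimension averages, and a revision queue. A minimal sketch in Python (all names, dimensions, and the 1–10 scale are hypothetical illustrations, not the skill's actual internal format):

```python
from statistics import mean

# Hypothetical per-item scores on a 1-10 scale, one entry per dimension.
scores = {
    "email-v1.txt":   {"clarity": 8, "persuasion": 6, "brand_fit": 9},
    "email-v2.txt":   {"clarity": 5, "persuasion": 7, "brand_fit": 6},
    "ad-copy-fb.txt": {"clarity": 9, "persuasion": 9, "brand_fit": 8},
}

# Overall score per item: mean across its dimensions.
overall = {name: mean(dims.values()) for name, dims in scores.items()}

# Content ranking, strongest first.
ranking = sorted(overall, key=overall.get, reverse=True)

# Per-dimension analysis: average each dimension across the whole set.
dimensions = {d: mean(item[d] for item in scores.values())
              for d in next(iter(scores.values()))}

# Prioritized revision list: weakest items first.
revise_first = ranking[::-1]
```

The same aggregation applies whether the set is three files or an entire library; only the per-item scoring step changes with evaluation depth.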

Input Required

The user must provide (or will be prompted for):

  • Content sources: One or more of the following:
    • A list of file paths (e.g., "evaluate these 5 files: email-v1.txt, email-v2.txt, landing-page.html, ad-copy-fb.txt, ad-copy-google.txt")
    • A directory path (e.g., "evaluate everything in /campaign-q1-assets/") — all text-based files in the directory will be included
    • Multiple inline content blocks with labels (e.g., "Evaluate these: [Label: Homepage Hero] content... [Label: Email Subject] content...")
  • Content type: Optional — applied globally (e.g., "these are all email subject lines") or specified per item. If omitted, the evaluator will infer type from content characteristics
  • Evidence file: Optional — shared context document (brief, strategy doc, audience research) applied across all evaluations for more relevant scoring
  • Evaluation depth: Optional — quick (default, faster per-item evaluation) or full (comprehensive evaluation with detailed per-dimension commentary per item). Quick is recommended for sets larger than 10 items; full for critical campaign assets
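Putting the inputs together, an invocation might look like the following (phrasing is illustrative; the command accepts free-form input as described above, and these file and directory names are hypothetical):

```
/dm:eval-suite evaluate these 5 files: email-v1.txt, email-v2.txt, landing-page.html, ad-copy-fb.txt, ad-copy-google.txt

/dm:eval-suite evaluate everything in /campaign-q1-assets/ — these are all email subject lines; use full depth and brief.md as the evidence file
```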