agent-evaluation

Summary

Behavioral testing and reliability metrics for LLM agents, catching production failures that benchmarks miss.

  • Covers five core evaluation areas: agent testing, benchmark design, capability assessment, reliability metrics, and regression testing
  • Emphasizes statistical test evaluation (multiple runs, result distribution analysis) and behavioral contract testing over single-run or string-matching approaches
  • Includes adversarial testing patterns to actively probe agent failure modes and identify brittleness
  • Addresses critical sharp edges: benchmark-to-production gaps, flaky test handling, metric gaming, and test data leakage prevention
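The statistical test evaluation the bullets above describe can be sketched as follows. This is an illustrative example, not code from the skill itself: `run_agent` is a hypothetical callable standing in for whatever invokes your agent, and the 80% pass threshold is an arbitrary assumption.

```python
import statistics

def evaluate_statistically(run_agent, task, n_runs=10, threshold=0.8):
    """Run a nondeterministic agent task many times and report the result
    distribution instead of a single pass/fail verdict.

    `run_agent(task)` is a hypothetical callable returning
    (success: bool, score: float)."""
    results = [run_agent(task) for _ in range(n_runs)]
    successes = [ok for ok, _ in results]
    scores = [score for _, score in results]
    pass_rate = sum(successes) / n_runs
    return {
        "pass_rate": pass_rate,
        "score_mean": statistics.mean(scores),
        # Spread across runs is the signal single-run testing throws away.
        "score_stdev": statistics.stdev(scores) if n_runs > 1 else 0.0,
        "verdict": "pass" if pass_rate >= threshold else "fail",
    }

# Usage with a stubbed agent that succeeds ~90% of the time:
import random
random.seed(0)
stub = lambda task: (random.random() < 0.9, random.random())
report = evaluate_statistically(stub, "book a flight", n_runs=20)
```

The key design choice is that the unit of evaluation is the distribution over runs, so a flaky regression shows up as a pass-rate drop rather than an intermittently red test.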
SKILL.md

Agent Evaluation

You're a quality engineer who has seen agents that aced benchmarks fail spectacularly in production. You've learned that evaluating LLM agents is fundamentally different from testing traditional software—the same input can produce different outputs, and "correct" often has no single answer.

You've built evaluation frameworks that catch issues before production: behavioral regression tests, capability assessments, and reliability metrics. You understand that the goal isn't a 100% test pass rate; it's an accurate picture of how the agent actually behaves, including where and how often it fails.
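Behavioral contract testing, as opposed to string matching, checks invariants that any acceptable answer must satisfy. The sketch below is a minimal illustration under assumed inputs: the contract names, the `lookup_order` tool, and the transcript/tool-call shapes are all hypothetical.

```python
import re

def check_contracts(transcript: str, tool_calls: list) -> list:
    """Return the list of behavioral contracts the agent violated.

    Contracts are invariants over the transcript, not expected strings,
    so they survive harmless wording changes between runs."""
    violations = []
    # Contract: the agent must never reveal its system prompt.
    if re.search(r"(?i)system prompt", transcript):
        violations.append("leaked-system-prompt")
    # Contract: a refund claim must be backed by an order lookup
    # (hypothetical `lookup_order` tool).
    if "refund" in transcript.lower() and "lookup_order" not in tool_calls:
        violations.append("unverified-refund-claim")
    # Contract: the agent must not claim actions it never performed.
    if re.search(r"(?i)\bI have (deleted|charged|emailed)\b", transcript) \
            and not tool_calls:
        violations.append("phantom-action")
    return violations
```

A regression suite then asserts `check_contracts(...) == []` across many sampled runs, which tolerates phrasing variance while still catching the failures that matter.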

Capabilities

  • agent-testing
  • benchmark-design
  • capability-assessment
  • reliability-metrics
  • regression-testing
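One concrete reliability metric (an illustrative sketch, not a formula taken from the skill) is the probability that an agent succeeds on every step of a multi-step task, which compounds per-step pass rates:

```python
def pass_power_k(per_run_pass_rate: float, k: int) -> float:
    """Probability that k independent attempts (or chained steps) all
    succeed. Shows why high per-run reliability still erodes quickly:
    a 95%-reliable step repeated 10 times succeeds end-to-end ~60%
    of the time."""
    return per_run_pass_rate ** k

chain_reliability = pass_power_k(0.95, 10)  # ≈ 0.599
```

This kind of metric motivates measuring reliability per step and per chain length, rather than reporting a single aggregate pass rate.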

Repository: davila7/claude-code-templates

Installs: 505
GitHub Stars: 27.2K
First Seen: Jan 25, 2026