Advanced Evaluation
This skill covers production-grade techniques for evaluating LLM outputs using LLMs as judges. It synthesizes research from academic papers, industry practices, and practical implementation experience into actionable patterns for building reliable evaluation systems.
Key insight: LLM-as-a-Judge is not a single technique but a family of approaches, each suited to different evaluation contexts. Choosing the right approach and mitigating known biases is the core competency this skill develops.
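As a concrete illustration, the sketch below implements one member of that family: a pairwise-comparison judge with position swapping to reduce order bias. This is a minimal sketch under stated assumptions, not a prescribed implementation; `call_llm` is a hypothetical placeholder for whatever model client you use, and the prompt wording and verdict parsing are illustrative.

```python
# Minimal pairwise-comparison judge sketch (illustrative, not prescriptive).
# `call_llm` is a hypothetical placeholder for your judge-model client.

JUDGE_PROMPT = """You are an impartial judge. Given a question and two answers,
decide which answer is better. Reply with exactly "A", "B", or "TIE".

Question: {question}

Answer A: {answer_a}

Answer B: {answer_b}

Verdict:"""


def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to your judge model and return its text reply."""
    raise NotImplementedError


def judge_pair(question: str, answer_a: str, answer_b: str) -> str:
    """Return 'A', 'B', or 'TIE', accepting a verdict only if it survives a position swap."""
    first = call_llm(
        JUDGE_PROMPT.format(question=question, answer_a=answer_a, answer_b=answer_b)
    ).strip().upper()
    # Swap the answer order and re-judge to check for position bias.
    second = call_llm(
        JUDGE_PROMPT.format(question=question, answer_a=answer_b, answer_b=answer_a)
    ).strip().upper()
    second = {"A": "B", "B": "A"}.get(second, second)  # map swapped verdict back to original labels
    return first if first == second else "TIE"  # inconsistent verdicts count as a tie
```

Accepting only verdicts that agree across both orderings is one common mitigation for position bias; other known biases (verbosity preference, self-preference) call for their own controls.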
When to Activate
Activate this skill when:
- Building automated evaluation pipelines for LLM outputs
- Comparing multiple model responses to select the best one
- Establishing consistent quality standards across evaluation teams
- Debugging evaluation systems that show inconsistent results
- Designing A/B tests for prompt or model changes
- Creating rubrics for human or automated evaluation
- Analyzing correlation between automated and human judgments (a minimal sketch follows this list)
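For the last item above, the sketch below checks how well automated judge scores track human ratings. It assumes paired integer scores (e.g. a 1-5 rubric) for the same examples and uses Spearman rank correlation plus quadratically weighted Cohen's kappa, which is one reasonable choice among several agreement measures.

```python
# Sketch: agreement between automated judge scores and human ratings,
# assuming paired 1-5 integer scores for the same set of examples.
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score


def judge_human_agreement(judge_scores: list[int], human_scores: list[int]) -> dict:
    """Report rank correlation and chance-corrected agreement for paired scores."""
    rho, p_value = spearmanr(judge_scores, human_scores)
    # Quadratic weights penalise large disagreements (1 vs 5) more than near-misses (3 vs 4).
    kappa = cohen_kappa_score(judge_scores, human_scores, weights="quadratic")
    return {"spearman_rho": rho, "p_value": p_value, "weighted_kappa": kappa}


# Example: five prompts scored by the judge and by a human annotator.
print(judge_human_agreement([4, 2, 5, 3, 1], [5, 2, 4, 3, 1]))
```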
Core Concepts