Session Feedback Analyzer

Mines Claude Code session JSONL logs for implicit user feedback. When a user corrects, redoes, reverts, or partially accepts AI output after a skill invocation, that signals a skill gap. Outputs a structured feedback.jsonl with per-event dimension attribution for the improvement pipeline.
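
For concreteness, a single feedback.jsonl event might look like the record below. This is a sketch only: the field names (event_type, dimension, session_id, and so on) are assumptions inferred from the description above, not the skill's documented schema.

```python
import json

# Hypothetical feedback.jsonl event; field names are illustrative,
# not the skill's documented schema.
event = {
    "timestamp": "2026-04-08T14:32:10Z",   # when the correction was observed
    "skill": "some-skill-name",            # skill whose output was corrected
    "event_type": "revert",               # correct | redo | revert | partial_accept
    "dimension": "accuracy",              # accuracy | coverage | trigger_quality | efficiency
    "session_id": "abc123",               # source session JSONL file
}
print(json.dumps(event))  # one JSON object per line, i.e. JSONL
```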

When to Use

  • Compute per-skill correction rates to find which skills users correct most often (see the sketch after this list).
  • Generate feedback.jsonl as input for improvement-generator's candidate prioritization.
  • Track correction trends over time (30-day rolling windows) to detect skill quality regression.
  • Identify hotspot dimensions (accuracy vs coverage vs trigger_quality vs efficiency) per skill.
  • Compare correction_rate before and after an improvement to validate whether a change actually helped.
  • Audit a single skill's feedback history with --skill-filter to understand why users reject its output.
  • Feed dimension hotspots into improvement-generator so candidates target the dimensions users care about.
  • Bootstrap the auto-improvement loop: analyzer output is the starting signal that tells the pipeline which skills need work.
  • Investigate spikes in correction_rate after a skill update to decide whether to rollback.
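
A minimal sketch of the first two bullets, assuming the hypothetical record shape shown earlier. It also assumes feedback.jsonl records clean "accept" events so that correction_rate has a denominator; that is an assumption, not documented behavior.

```python
from collections import defaultdict
from datetime import datetime, timedelta, timezone
import json

WINDOW = timedelta(days=30)
now = datetime.now(timezone.utc)

totals = defaultdict(int)       # invocations per skill in the window
corrections = defaultdict(int)  # corrected invocations per skill

with open("feedback.jsonl") as f:
    for line in f:
        event = json.loads(line)
        # fromisoformat only accepts the "Z" suffix on Python 3.11+,
        # so normalize it to an explicit UTC offset first.
        ts = datetime.fromisoformat(event["timestamp"].replace("Z", "+00:00"))
        if now - ts > WINDOW:
            continue  # keep only the 30-day rolling window
        totals[event["skill"]] += 1
        if event["event_type"] != "accept":
            corrections[event["skill"]] += 1

for skill, n in sorted(totals.items()):
    print(f"{skill}: correction_rate={corrections[skill] / n:.2f} over {n} invocations")
```

A --skill-filter style audit then reduces to keying into these dictionaries for a single skill, and validating an improvement is the same computation run over the windows before and after the change.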

When NOT to Use

  • Synthetic task evaluation against a predefined task suite; use improvement-evaluator for that instead.