confidence-calibration
Confidence Calibration Framework
When This Activates
This skill activates when:
- Expressing uncertainty about a suggestion
- Working in a domain with past errors
- User asks "how confident are you?"
- Making predictions or recommendations
Domain Tracking
The system tracks prediction accuracy across domains, so stated confidence can be grounded in past performance rather than guessed fresh each time.
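The source does not show how domain tracking is implemented; a minimal sketch of the idea, with all names hypothetical, is a per-domain log of stated confidence versus outcome, from which a calibration gap (overconfidence when positive) can be computed:

```python
from collections import defaultdict


class DomainCalibrationTracker:
    """Hypothetical sketch: track stated confidence vs. outcomes per domain."""

    def __init__(self):
        # domain -> list of (stated_confidence, prediction_was_correct)
        self._records = defaultdict(list)

    def record(self, domain, confidence, correct):
        """Log one prediction: confidence in [0, 1] and whether it held up."""
        self._records[domain].append((float(confidence), bool(correct)))

    def calibration(self, domain):
        """Return (mean stated confidence, observed accuracy, gap) or None."""
        records = self._records[domain]
        if not records:
            return None
        mean_conf = sum(c for c, _ in records) / len(records)
        accuracy = sum(ok for _, ok in records) / len(records)
        # gap > 0: overconfident in this domain; gap < 0: underconfident
        return mean_conf, accuracy, mean_conf - accuracy


tracker = DomainCalibrationTracker()
tracker.record("sql", 0.9, True)
tracker.record("sql", 0.8, False)
mean_conf, accuracy, gap = tracker.calibration("sql")
```

With these two records the tracker reports a mean stated confidence of 0.85 against an observed accuracy of 0.5, signalling overconfidence in the "sql" domain; a real implementation would persist this across sessions and feed it back into future confidence statements.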