# Calibrate - Post-Launch AI Feature Calibration

## Core Philosophy
**Calibration happens after launch, not before.**
**The mistake:** Building elaborate systems to perfectly calibrate AI behavior before launch.

**The reality:** You learn what quality means by shipping to users and seeing what they actually need.
### The Calibration Loop

1. Deploy at the current agency level.
2. Monitor performance in production.
3. Analyze the results and learn.
4. Calibrate the system.
5. Test the changes.
6. Consider an agency increase.
7. Repeat.
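As a rough illustration, here is a minimal sketch of one pass through the loop. This is not part of the skill itself; every name here (`monitor_production`, `calibrate_system`, the success-rate threshold) is a hypothetical stand-in for your own metrics and tooling.

```python
# Minimal sketch of the calibration loop. All names and thresholds
# below are hypothetical placeholders, not APIs defined by this skill.
from dataclasses import dataclass


@dataclass
class CycleReport:
    success_rate: float  # fraction of AI outputs users accepted as-is
    escalations: int     # cases the AI should have deferred to a human


def monitor_production(agency_level: int) -> CycleReport:
    """Stand-in for real monitoring: logs, evals, user feedback."""
    return CycleReport(success_rate=0.92, escalations=3)


def calibrate_system(report: CycleReport) -> None:
    """Stand-in for adjusting prompts, guardrails, or routing rules."""
    print(f"Calibrating: success={report.success_rate:.0%}, "
          f"escalations={report.escalations}")


def run_cycle(agency_level: int) -> int:
    report = monitor_production(agency_level)  # deploy + monitor
    calibrate_system(report)                   # analyze + calibrate
    # Test changes, then raise agency only when the data supports it.
    if report.success_rate >= 0.95 and report.escalations == 0:
        agency_level += 1
    return agency_level


agency = 1  # start conservative; earn autonomy cycle by cycle
for _ in range(3):  # repeat
    agency = run_cycle(agency)
```

The one design choice the loop encodes: agency increases only after a cycle of production evidence, never preemptively.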
## More from breethomas/pm-thought-partner
**agent-workflow**
Expert system for designing and architecting AI agent workflows based on proven Meta methodologies. Use when users need to build AI agents, create agent workflows, solve problems with agentic systems, integrate multiple tools into an agent architecture, or want guidance on agent design patterns. Helps translate business problems into structured agent solutions with clear scope, tool integration, and multi-layer architecture planning.
**context-engineering**
[ARCHIVED] Full 4D Context Canvas reference. For new AI features, use /spec --ai. For debugging, use /ai-debug. For quality checks, use /context-check.
**spec**
Write specifications at the right depth for any project. Progressive disclosure from quick Linear issues to full AI feature specs. Embeds Linear Method philosophy (brevity, clarity, momentum) with context engineering for AI features. Use for any spec work - quick tasks, features, or AI products.
**competitive-research**
Systematic competitive intelligence with parallel agent analysis. Analyzes competitors thoroughly and synthesizes into actionable insights.
**pmf-survey**
Create and analyze a PMF survey using Rahul Vohra's Superhuman framework, built around the magic 40% benchmark for product-market fit.
**four-risks**
Run Marty Cagan's Four Risks assessment on an issue (value, usability, feasibility, viability). Use when evaluating features before building.