# Create Metrics Framework

## Overview

Build a metrics framework combining the North Star Framework (Amplitude), AARRR Pirate Metrics, and the HEART Framework (Google). The framework defines one North Star metric, the input metrics that drive it, and guardrail metrics that prevent gaming, giving the team a shared definition of success.

## Workflow
- Read product context -- Scan `.chalk/docs/product/` for the product profile, PRDs with success metrics, JTBD docs, and any existing metrics definitions. Understand what the product does and who it serves before defining metrics.
- Parse scope -- Extract from `$ARGUMENTS` the product area or business goal. If unspecified, build a company-level metrics framework using the product profile.
- Determine the next file number -- Read filenames in `.chalk/docs/product/` to find the highest numbered file. The next number is `highest + 1`.
- Define the North Star metric -- Identify the single metric that best captures the core value the product delivers to users. It must be: measurable, actionable by the team, a leading indicator of revenue, and reflective of user value (not just business extraction).
- Map input metrics -- Identify 3-5 input metrics that drive the North Star. Use the AARRR framework as a lens: Acquisition (how users find you), Activation (first value moment), Retention (users come back), Revenue (users pay), Referral (users bring others). Not all stages need a metric -- pick the ones that matter most.
- Apply the HEART framework -- For key user journeys, consider: Happiness (satisfaction), Engagement (usage depth), Adoption (new feature uptake), Retention (continued use), Task success (completion rate). Use this to fill gaps the AARRR lens missed.
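The "determine the next file number" step above can be sketched as a small helper. This is a minimal sketch, not part of the skill itself: the function name and the sample filenames are illustrative, and it assumes numbered files follow a `NN-name.md` pattern; only the `.chalk/docs/product/` path comes from the workflow.

```python
import re

def next_file_number(filenames):
    """Return the next file number: highest leading number plus one.

    Assumes numbered docs look like '07-metrics-framework.md';
    files without a leading number are ignored.
    """
    numbers = [
        int(m.group(1))
        for name in filenames
        if (m := re.match(r"(\d+)", name))  # leading digits only
    ]
    # Start at 1 if the directory has no numbered files yet
    return (max(numbers) + 1) if numbers else 1

# Usage: pass the filenames read from .chalk/docs/product/
# next_file_number(["01-profile.md", "03-prd.md", "notes.md"]) -> 4
```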