analyze-competitors
Analyze Competitors
Overview
Generate a competitive positioning matrix and Blue Ocean Strategy canvas that compares your product against named competitors across key dimensions: features, pricing, target market, UX, strengths, and weaknesses. Surfaces differentiation opportunities and strategic white space.
Workflow
- Read product context -- Scan `.chalk/docs/product/` for the product profile (`0_product_profile.md`), JTBD docs, and any existing competitive analyses. If no product context exists, ask the user to describe their product before proceeding.
- Parse competitors and dimensions -- Extract from `$ARGUMENTS` the competitor names and any specific dimensions the user wants compared. If no competitors are named, ask the user to list 2-5 direct competitors. If no dimensions are specified, use the defaults: core features, pricing model, target market, UX quality, integration ecosystem, go-to-market approach.
- Determine the next file number -- Read filenames in `.chalk/docs/product/` to find the highest numbered file. The next number is `highest + 1`.
- Build the competitive matrix -- For each competitor, analyze across every dimension. Use a consistent rating or description per cell. Be specific -- "freemium with $29/mo pro tier" not "has free plan."
- Create the strategy canvas -- Describe a Blue Ocean Strategy canvas: list the competing factors on the X-axis and value level (low to high) on the Y-axis. Plot your product and each competitor. Identify factors where you can eliminate, reduce, raise, or create to find uncontested market space.
- Identify positioning insights -- Summarize: where you are differentiated, where you are at parity, where competitors have an advantage, and where white space exists.
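A matrix row built to that standard of specificity might look like this. The competitor names and all cell values here are illustrative placeholders, not from the source (only the "freemium with $29/mo pro tier" phrasing echoes the workflow's own example):

```markdown
| Dimension     | Your product              | Competitor A                  | Competitor B           |
| ------------- | ------------------------- | ----------------------------- | ---------------------- |
| Pricing model | usage-based, no free tier | freemium with $29/mo pro tier | enterprise quotes only |
| Target market | solo founders             | mid-market product teams      | enterprise PMOs        |
```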
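The file-numbering step above can be sketched as follows. This is a minimal sketch, not the skill's actual implementation: the `.chalk/docs/product/` path and the `0_product_profile.md` naming pattern come from the workflow, while the `N_` filename regex is an assumption about how the numbered files are named.

```python
import re
from pathlib import Path

def next_file_number(docs_dir: str = ".chalk/docs/product") -> int:
    """Return highest leading file number + 1, or 0 if no numbered files exist."""
    numbers = []
    for path in Path(docs_dir).glob("*.md"):
        # Assumed convention: files start with a number and underscore,
        # e.g. "0_product_profile.md" -> 0
        match = re.match(r"(\d+)_", path.name)
        if match:
            numbers.append(int(match.group(1)))
    return (max(numbers) + 1) if numbers else 0
```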
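The canvas step pairs per-factor value levels with the eliminate/reduce/raise/create analysis (Blue Ocean's Four Actions Framework). One way to represent that output is a plain data structure; every factor name and value below is a hypothetical example, not prescribed by the skill:

```python
# Hypothetical strategy-canvas data: factors on the X-axis,
# value levels (0 = low, 5 = high) per product on the Y-axis.
canvas = {
    "factors": ["price", "feature depth", "onboarding speed", "integrations"],
    "value_levels": {
        "your_product": [2, 3, 5, 4],
        "competitor_a": [4, 5, 2, 3],
    },
    # Four Actions Framework: where to find uncontested market space.
    "errc": {
        "eliminate": ["per-seat pricing"],
        "reduce":    ["feature depth"],
        "raise":     ["onboarding speed"],
        "create":    ["built-in competitive analysis"],
    },
}
```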