write-a-prd
This skill will be invoked when the user wants to create a PRD. You may skip steps if you don't consider them necessary.
- Ask the user for a long, detailed description of the problem they want to solve and any potential ideas for solutions.
- Explore the repo to verify their assertions and understand the current state of the codebase.
- Interview the user relentlessly about every aspect of the plan until you reach a shared understanding. Walk down each branch of the design tree, resolving dependencies between decisions one by one.
- Sketch out the major modules you will need to build or modify to complete the implementation. Actively look for opportunities to extract deep modules that can be tested in isolation. A deep module (as opposed to a shallow module) encapsulates a lot of functionality behind a simple, testable interface that rarely changes. Check with the user that these modules match their expectations, and ask which modules they want tests written for.
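To make the deep-vs-shallow distinction concrete, here is a minimal illustrative sketch (not from the skill itself): a small module whose single stable entry point hides several normalization steps, so it can be unit-tested in isolation.

```python
# Illustrative only: a "deep" module exposes one small, stable interface
# while hiding nontrivial logic behind it.

import re
import unicodedata

def slugify(text: str) -> str:
    """Single stable entry point; all normalization detail is hidden inside."""
    # Strip accents, lowercase, then collapse runs of non-alphanumerics
    # into single hyphens and trim the edges.
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    text = re.sub(r"[^a-z0-9]+", "-", text.lower())
    return text.strip("-")

print(slugify("Héllo, Wörld!"))  # → 'hello-world'
```

Callers depend only on `str -> str`; the accent handling, casing, and separator rules can all change without touching the interface, which is what makes the module deep.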
- Once you have a complete understanding of the problem and solution, use the template below to write the PRD. Ask the user which Linear team to create the issue in, then submit it as a Linear issue using the Linear MCP tools (save_issue with title and description).
Problem Statement
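As a sketch of the final submission step, the `save_issue` payload might be assembled as below. This is an assumption-laden illustration: the skill only specifies `title` and `description`, and the `team` field, function name, and example values here are hypothetical, with the exact schema depending on the Linear MCP server in use.

```python
# Hypothetical sketch of the payload passed to the Linear MCP save_issue tool.
# Only "title" and "description" are named in the skill text; "team" and all
# example values are placeholders.

def build_save_issue_payload(team: str, title: str, prd_markdown: str) -> dict:
    """Bundle the finished PRD into the title/description shape for save_issue."""
    return {
        "team": team,                 # Linear team chosen by the user
        "title": title,               # short summary of the PRD
        "description": prd_markdown,  # full PRD body, starting with Problem Statement
    }

payload = build_save_issue_payload(
    team="ENG",
    title="PRD: Example feature",
    prd_markdown="Problem Statement\n...",
)
print(sorted(payload.keys()))  # → ['description', 'team', 'title']
```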