create-interview-guide
Create Interview Guide
Overview
Generate a structured customer interview guide based on The Mom Test (Rob Fitzpatrick). Every question is designed to extract truthful signal about past behavior, real pain, and actual workflows — not hypothetical opinions or compliments. The guide includes a "Bad vs Good" column so the interviewer can self-correct in real time, and "What to listen for" cues so they know what signal to capture.
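For concreteness, the generated guide's question table might look like the sketch below. The topic, questions, and cues are illustrative placeholders, not canonical output:

```markdown
| # | Question | Bad vs Good | What to listen for |
|---|----------|-------------|--------------------|
| 1 | "Walk me through the last time you onboarded a new teammate." | Bad: "Would you use an onboarding tool?" / Good: anchors to a specific past event | Concrete steps, tools named, time spent, workarounds |
| 2 | "What did that problem end up costing you?" | Bad: "Is onboarding painful?" / Good: asks for consequences, not opinions | Hours lost, money spent, who else was affected |
```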
Workflow
- Read product context — Scan `.chalk/docs/product/` for the product profile (`0_product_profile.md`), existing JTBD canvases, research syntheses, and any prior interview guides. Check `.chalk/docs/product/` for research synthesis docs to avoid re-asking already-answered questions. If no product context exists, work from what the user provides.
- Parse the research goal — Extract the topic, hypothesis, or area of exploration from `$ARGUMENTS`. If the user provides a vague topic (e.g., "onboarding"), ask one round of clarifying questions: Who are we interviewing? What decisions will this research inform? What do we already believe is true?
- Identify the interview persona — Determine who will be interviewed based on product context or user input. Note their likely context, role, and relationship to the problem space. This shapes the warm-up questions and the vocabulary used throughout.
- Generate warm-up questions (5 questions) — These build rapport and establish context. They ask about the person's role, daily workflow, and general environment. No product-related questions yet. Purpose: make the interviewee comfortable and give the interviewer context to ask better follow-up questions.
- Generate core questions (8-12 questions) — These are the heart of the interview. Every question must pass all Mom Test rules:
  - Asks about past behavior, not hypothetical futures
  - Asks for specifics (last time, specific instance), not generalizations
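Applying these rules, a core question can be self-checked against a bad/good pair like the following (both questions are illustrative, continuing the "onboarding" example):

```markdown
Bad:  "Would you pay for a tool that automated your onboarding checklist?"
      (hypothetical future; invites compliments rather than signal)
Good: "Tell me about the last time you onboarded someone. What was the most
      tedious part?" (past behavior, specific instance, surfaces real pain)
```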