create-threat-model
Create Threat Model
Analyze the current codebase and produce a structured threat model at .turbo/threat-model.md.
The threat model describes the current state of the codebase: what it protects, where trust boundaries are, how it can be attacked, what defenses exist, and how severe each risk is. It is descriptive, not prescriptive. Do not include remediation recommendations.
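For orientation only, the output file might be organized along these lines; the section names below are hypothetical illustrations, not the four sections the command actually mandates:

```markdown
<!-- .turbo/threat-model.md — section names are illustrative -->
# Threat Model: <project>

## Overview
What the system is and what assets it protects.

## Trust Boundaries
Where data or control crosses privilege levels.

## Attack Surface & Threats
How each boundary can be attacked today.

## Defenses & Severity
Existing mitigations and how severe each residual risk is.
```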
Optional: $ARGUMENTS may specify scope (directories, modules, or focus areas). When scope is provided, limit reconnaissance and code discovery to the specified directories or modules. Still produce all four sections, but title the overview to reflect the narrowed scope and note what is excluded.
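For example, assuming the command is installed as a slash command, invocations might look like this (the paths are illustrative):

```
/create-threat-model              # whole codebase
/create-threat-model src/auth     # scoped to the auth module
```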
Step 1: Reconnaissance
Build a mental model of the system before analyzing threats.
- Read the project README, CLAUDE.md, and any architecture or security documentation.
- Examine top-level directory structure, build files, and dependency manifests to identify modules, languages, frameworks, and deployment model (a sketch of this scan follows the list).
- Classify the application type: library, CLI tool, web service, desktop app, mobile app, or hybrid. This determines which threat categories and trust boundary patterns apply.
- Identify security-critical dependencies (crypto libraries, auth providers, network stacks, native/FFI libraries). Note what this codebase delegates versus what it owns.
- Read any existing security documentation: SECURITY.md, audit reports, threat models, or changelog entries mentioning CVEs.
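To make the discovery bullets above concrete, here is a minimal Python sketch of the manifest scan; the manifest names and the security-hint keyword list are assumptions for illustration, not part of the command:

```python
# Illustrative only: a rough sketch of scanning dependency manifests
# and flagging entries that hint at security-critical roles.
from pathlib import Path

MANIFESTS = ["package.json", "Cargo.toml", "go.mod",
             "requirements.txt", "pom.xml", "Gemfile"]
SECURITY_HINTS = ["crypto", "tls", "ssl", "jwt", "oauth", "auth", "ffi"]

def scan(repo: Path) -> None:
    for name in MANIFESTS:
        manifest = repo / name
        if not manifest.exists():
            continue
        print(f"manifest: {name}")
        text = manifest.read_text(errors="ignore").lower()
        # Flag dependencies whose names suggest security-critical roles.
        flagged = [hint for hint in SECURITY_HINTS if hint in text]
        if flagged:
            print(f"  security-relevant hints: {', '.join(flagged)}")

scan(Path("."))
```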
Step 2: Security-Relevant Code Discovery
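As a rough illustration of what security-relevant code discovery can involve, the following sketch greps source files for sensitive patterns; the pattern list and the file glob are assumptions, not the command's prescribed method:

```python
# Illustrative only: surface candidate security-relevant code by
# matching lines against keywords that often mark sensitive operations.
import re
from pathlib import Path

PATTERNS = re.compile(
    r"(password|secret|token|decrypt|verify_signature|exec\(|subprocess|deserialize)",
    re.IGNORECASE,
)

for path in Path(".").rglob("*.py"):  # widen the glob per the project's languages
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        if PATTERNS.search(line):
            print(f"{path}:{lineno}: {line.strip()}")
```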