# cavecrew
Cavecrew = three subagent presets that emit caveman-terse output. Same jobs as the Anthropic defaults (Explore, edit-style agents, reviewer); the difference is that the tool result they return is compressed, so each delegation consumes less main-context budget.
## When to use cavecrew vs alternatives
| Task | Use |
|---|---|
| "Where is X defined / what calls Y / list uses of Z" | cavecrew-investigator |
| Same but you also want suggestions/architecture commentary | Explore (vanilla) |
| Surgical edit, ≤2 files, scope obvious | cavecrew-builder |
| New feature / 3+ files / cross-cutting refactor | Main thread or feature-dev:code-architect |
| Review diff, branch, or file for bugs | cavecrew-reviewer |
| Deep code review with rationale + alternatives | Code Reviewer (vanilla) |
| One-line answer you already know | Main thread, no subagent |
Rule of thumb: if you'd be happy with the subagent's output in a third of the tokens, pick cavecrew. If you'd want prose, pick vanilla.
## Why this exists (the real win)
Subagent tool results get injected into main context verbatim. A vanilla Explore that returns 2k tokens of prose costs 2k tokens of main-context budget every time. The same finding from cavecrew-investigator returns ~700 tokens. Across 20 delegations in one session that's the difference between context exhaustion and finishing the task.
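The arithmetic above can be sketched as follows. The 2,000 and 700 token figures come from the paragraph; the delegation count and the notion of a fixed session budget are illustrative assumptions, not measured values.

```python
# Illustrative sketch: cumulative main-context cost of subagent tool
# results over one session. Token figures are the ones quoted above;
# nothing here is a real model limit.

VANILLA_RESULT_TOKENS = 2_000   # typical vanilla Explore reply
CAVECREW_RESULT_TOKENS = 700    # same finding from cavecrew-investigator
DELEGATIONS = 20                # delegations in one session (assumed)

vanilla_cost = VANILLA_RESULT_TOKENS * DELEGATIONS    # 40,000 tokens
cavecrew_cost = CAVECREW_RESULT_TOKENS * DELEGATIONS  # 14,000 tokens
saved = vanilla_cost - cavecrew_cost                  # 26,000 tokens

print(f"vanilla: {vanilla_cost}  cavecrew: {cavecrew_cost}  saved: {saved}")
```

At 20 delegations the vanilla presets spend 40k tokens of main context on tool results alone; cavecrew spends 14k for the same findings, which is the "context exhaustion vs finishing the task" gap described above.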
## Related skills