Agent Change Walkthrough Skill
Purpose
Generate a single-story walkthrough of an AI-authored code change, explaining the implementation from trigger to final behavior.
Core Method
Follow six steps:
- Capture intent — Restate the change in plain language with scope and non-goals
- Build evidence — Gather the relevant diffs and repo state with `git status`, `git diff`, and `git show`
- Build story stack — Order steps dependency-first (contracts/types before usage, definitions before invocations)
- Write narrative — Each step: clear title, CHANGED/UNCHANGED marker, filename with line number, code snippet, prose explanation
- Integrate analysis — Add trade-offs, alternatives, performance notes, and risks inline at relevant steps
- Close out — Summarize what changed, why behavior differs, what to monitor
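The "Build evidence" step above can be sketched as a shell session. This is a minimal illustration, not the skill's exact procedure: the throwaway repository, file name, and commit messages are invented for the demo.

```shell
# A minimal sketch of the "Build evidence" step, run against a throwaway
# repository so the commands are demonstrable end to end. The file name
# and commit messages are illustrative only.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "baseline"

echo "hello" > feature.txt

# Working-tree state: which files the change touches
git status --short            # prints "?? feature.txt"

git add feature.txt
git -c user.email=demo@example.com -c user.name=demo commit -q -m "add feature"

# Full patch plus per-file stats for the commit under walkthrough
git show --stat HEAD
```

In a real walkthrough these commands run against the repository the agent actually modified, and their output becomes the code snippets quoted in each narrative step.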
Key Principles