clean-ai-slop
AI Slop Cleaner
A corrective discipline for cleaning AI-generated code. Runs after code generation — whether from run-plan, a manual session, or any other source.
The core problem: LLMs produce code that works but carries distinctive smells. Over-commenting, unnecessary abstractions, defensive paranoia for impossible scenarios, verbose naming. Left unchecked, these accumulate into a codebase that is harder to read and maintain than hand-written code.
This skill removes those smells systematically, one category at a time, without changing behavior.
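To make the smells concrete, here is a hypothetical before-and-after. Every name in it is invented for illustration, and the rewrite assumes no caller actually passes None; it is a sketch of the smell categories, not output from the skill.

```python
from typing import Optional

# Before: all four smells in one place.
class UserEmailAddressValidationHelperManager:  # verbose naming, unnecessary abstraction
    """Manager that helps validate user email address strings."""

    def validate_user_email_address_string(self, value: Optional[str]) -> bool:
        # Check that the input is not None.  <- over-commenting
        if value is None:
            return False
        # Check that the input is really a string.  <- defensive paranoia:
        # the annotation already promises a str, and no caller passes anything else
        if not isinstance(value, str):
            return False
        # Check that the string contains an "@" symbol.
        return "@" in value

# After: the behavior callers actually rely on, with none of the smells.
def is_valid_email(email: str) -> bool:
    return "@" in email
```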
Hard Gates
These rules have no exceptions.
- Lock behavior before cleaning. Run existing tests. If coverage is insufficient, add regression tests for the code you're about to touch. No test coverage, no cleanup.
- One smell category per pass. Do not mix dead code removal with naming fixes. Complete one pass, verify, then start the next.
- Run tests after every pass. If tests fail, revert the pass and investigate. Do not proceed to the next category (see the sketch after this list).
- Stay in scope. Only touch files that were generated or modified by AI. Do not expand into "nearby" code that looks like it could use improvement.
- Preserve behavior exactly. If a cleanup changes observable behavior — even if you think the new behavior is "better" — revert it. Behavior changes require a separate task.
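A minimal sketch of how these gates compose into a loop, assuming a git repository and a pytest suite. The category list, the clean_pass stub, and the commit message format are invented for illustration; nothing here is prescribed by the skill itself.

```python
import subprocess

# One pass per smell category, never mixed (gate 2).
SMELL_CATEGORIES = ["dead-code", "over-commenting", "defensive-paranoia", "verbose-naming"]

def tests_pass() -> bool:
    return subprocess.run(["pytest", "-q"]).returncode == 0

def clean_pass(category: str) -> None:
    ...  # the actual cleanup work for one category goes here

# Gate 1: lock behavior before cleaning.
if not tests_pass():
    raise SystemExit("No test coverage, no cleanup.")

for category in SMELL_CATEGORIES:
    clean_pass(category)
    if tests_pass():  # Gate 3: run tests after every pass.
        subprocess.run(["git", "commit", "-am", f"cleanup: {category} pass"])
    else:
        # Tests failed: revert this pass and stop; do not start the next category.
        subprocess.run(["git", "checkout", "--", "."])
        raise SystemExit(f"'{category}' pass broke the tests; investigate before continuing.")
```

Note that `git checkout -- .` only restores tracked files; any untracked files created during a pass would need `git clean` as well.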
When To Use
Run this skill immediately after any AI code-generation step: when run-plan finishes, after a manual session, or whenever AI-generated code enters the codebase. The hard gates above apply in every case.