kill-argument


Kill Argument Exercise: Adversarial Attack-Defense Review

Stress-test a paper's headline claims against the strongest possible rejection argument: $ARGUMENTS

Why This Exists

Standard score-based reviews (/peer-review, /research-review, /auto-paper-improvement-loop) tend to produce balanced weakness lists. Each weakness gets ~equal attention, ranked CRITICAL > MAJOR > MINOR. Empirically, this misses one specific failure mode: the single most damaging argument a reviewer would write in a rejection paragraph — the one sentence that, if a senior area chair reads it, kills the paper.

A balanced reviewer might list "scope-overclaim risk" as MAJOR alongside 3-5 other MAJORs, never quite committing. An adversarial reviewer must commit: their entire job is to convince the area chair to reject in 200 words.

This skill runs that adversarial pass deliberately, then forces a second, fresh reviewer to defend the paper point-by-point, classify each rejection argument as already-fixed, partially-fixed, or still-unresolved, and surface which claims are actually load-bearing.
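The two-pass flow above can be sketched as a small data model. This is a hypothetical illustration of the attack-defense bookkeeping, not the skill's actual implementation; all names (`RejectionPoint`, `load_bearing`, the `Status` values) are invented for this sketch.

```python
# Hypothetical sketch of the attack-defense flow: pass 1 produces rejection
# arguments, pass 2 attaches a defense and a fix-status to each one.
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    ALREADY_FIXED = "already-fixed"
    PARTIALLY_FIXED = "partially-fixed"
    STILL_UNRESOLVED = "still-unresolved"

@dataclass
class RejectionPoint:
    claim: str                 # the headline claim under attack
    argument: str              # the strongest rejection argument against it
    defense: str = ""          # second reviewer's point-by-point response
    status: Status = Status.STILL_UNRESOLVED

def load_bearing(points: list[RejectionPoint]) -> list[RejectionPoint]:
    """Keep only the arguments that still threaten the paper after defense."""
    return [p for p in points if p.status is not Status.ALREADY_FIXED]

if __name__ == "__main__":
    points = [
        RejectionPoint(
            claim="method works in a general setting",
            argument="setting is mostly conditional, not truly general",
            defense="scope qualified in abstract and discussion",
            status=Status.PARTIALLY_FIXED,
        ),
    ]
    # Partially-fixed points survive the filter and demand further attention.
    print([p.argument for p in load_bearing(points)])
    # → ['setting is mostly conditional, not truly general']
```

The point of the filter is the skill's core asymmetry: only fully resolved attacks drop out, so anything partially fixed stays visible until the framing is actually repaired.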

Empirical motivation: in a real submission run, after several rounds of standard improvement (score 7-8/10), the kill-argument exercise surfaced framing weaknesses that no prior review caught (e.g., a setting being mostly conditional rather than truly general, or a baseline being irrelevant to real systems). Author rebuttal forced explicit scope qualifications in abstract and discussion that weren't visible from the score-based reviews alone.

How This Differs From Other Review Skills

| Skill | What it asks the reviewer | Output |
| --- | --- | --- |
| /peer-review | "Score this paper, list weaknesses by severity" | balanced weakness list |
More from wanshuiyin/auto-claude-code-research-in-sleep