Auto Review Loop: Autonomous Research Improvement

Autonomously iterate: review → implement fixes → re-review, until the external reviewer gives a positive assessment or MAX_ROUNDS is reached. A sketch of this loop follows the constants below.

Context: $ARGUMENTS

Constants

  • MAX_ROUNDS = 4
  • POSITIVE_THRESHOLD — a review counts as positive when score >= 6/10, or when the verdict contains "accept", "sufficient", or "ready for submission" (see the predicate sketch after this list)
  • REVIEW_DOC = review-stage/AUTO_REVIEW.md — cumulative review log; fall back to ./AUTO_REVIEW.md for legacy projects
  • REVIEWER_MODEL = gpt-5.4 — Model used via Codex MCP. Must be an OpenAI model (e.g., gpt-5.4, o3, gpt-4o)
  • REVIEWER_BACKEND = codex — Default: Codex MCP (xhigh). Override with reviewer: oracle-pro to route to GPT-5.4 Pro via Oracle MCP. See shared-references/reviewer-routing.md.
  • OUTPUT_DIR = review-stage/ — All review-stage outputs go here. Create the directory if it doesn't exist.
  • HUMAN_CHECKPOINT = false — When true, pause after each round's review (Phase B) and present the score + weaknesses to the user. Wait for user input before proceeding to Phase C. The user can: approve the suggested fixes, provide custom modification instructions, skip specific fixes, or stop the loop early. When false (default), the loop runs fully autonomously.
  • COMPACT = false — When true, (1) read EXPERIMENT_LOG.md and findings.md instead of parsing full logs on session recovery, (2) append key findings to findings.md after each round.
  • REVIEWER_DIFFICULTY = medium — Controls how adversarial the reviewer is. Three levels:
    • medium (default): the standard behavior described above — MCP-based review, where Claude controls what context GPT sees.
    • hard: Adds Reviewer Memory (GPT tracks its own suspicions across rounds) + Debate Protocol (Claude can rebut, GPT rules).
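
As a concrete illustration of POSITIVE_THRESHOLD, here is a minimal Python sketch of the positivity check. It assumes the reviewer's output has already been parsed into a numeric score and a free-text verdict; the skill itself does not prescribe how that parsing happens.

```python
# Hypothetical helper: evaluates POSITIVE_THRESHOLD on a parsed review.
POSITIVE_VERDICT_PHRASES = ("accept", "sufficient", "ready for submission")

def is_positive(score: float, verdict: str) -> bool:
    """True when score >= 6/10 or the verdict contains a positive phrase."""
    if score >= 6.0:
        return True
    verdict_lower = verdict.lower()
    return any(phrase in verdict_lower for phrase in POSITIVE_VERDICT_PHRASES)
```

For example, is_positive(5.0, "Sufficient for submission after minor edits") returns True on the verdict text even though the score alone would not clear the threshold.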
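
The overall control flow can be sketched the same way. In the sketch below, run_review and implement_fixes are hypothetical placeholders for the Phase B reviewer call (Codex MCP here) and Claude's Phase C fix pass; only the loop structure (MAX_ROUNDS, the positivity check from the previous sketch, the optional HUMAN_CHECKPOINT pause, and appending to REVIEW_DOC) mirrors what this skill specifies.

```python
# Hypothetical end-to-end sketch of the review loop; run_review and
# implement_fixes are placeholders, not real Codex MCP or Claude APIs.
from pathlib import Path

MAX_ROUNDS = 4
HUMAN_CHECKPOINT = False
REVIEW_DOC = Path("review-stage/AUTO_REVIEW.md")

def run_review(context: str) -> tuple[float, str, list[str]]:
    """Placeholder for Phase B: ask the external reviewer for a score,
    a verdict, and a list of weaknesses."""
    raise NotImplementedError

def implement_fixes(weaknesses: list[str]) -> None:
    """Placeholder for Phase C: implement the reviewer's suggested fixes."""
    raise NotImplementedError

def auto_review_loop(context: str) -> None:
    REVIEW_DOC.parent.mkdir(parents=True, exist_ok=True)  # create OUTPUT_DIR if missing
    for round_no in range(1, MAX_ROUNDS + 1):
        score, verdict, weaknesses = run_review(context)
        with REVIEW_DOC.open("a", encoding="utf-8") as log:  # cumulative log
            log.write(f"## Round {round_no}\nScore: {score}/10\nVerdict: {verdict}\n\n")
        if is_positive(score, verdict):  # predicate from the previous sketch
            return  # positive assessment: stop iterating
        if HUMAN_CHECKPOINT:
            # Pause after Phase B: present score and weaknesses, then wait.
            # The user may approve fixes, give custom instructions, or stop early.
            if input("Proceed to Phase C? [y/N] ").strip().lower() != "y":
                return
        implement_fixes(weaknesses)
    # MAX_ROUNDS exhausted without a positive assessment; REVIEW_DOC records each round.
```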