code-review
Code Review
v1.0 — Structured code review for research code, drawing on DIME, Gentzkow-Shapiro, AEA, and IPA standards
Review research code (Stata, R, or Python) against economics-specific quality standards. Catches silent failures, reproducibility risks, and style issues that generic linters miss.
Argument: $ARGUMENTS
- Path to a file (`.do`, `.R`, or `.py`) or a directory
- A project name (will look in `~/Dropbox/Github/[project]/`)
Modes (append to argument):
- `quick` (default) — Single-file review: correctness, reproducibility risks, style
- `full` — Deep single-file review with project context (reads master do-file, config, related files)
- `pipeline` — Multi-file review: trace the full analysis pipeline, check dependencies and flow
- `replication` — AEA replication package audit (README, data citations, reproducibility, completeness)
Flags:
- `fix` — Also output a corrected version of the file (otherwise review-only)
- `severity:high` — Only report high-severity issues (skip style nitpicks)
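As an illustrative sketch of how arguments, modes, and flags combine (the exact invocation syntax depends on how the skill is installed, and the file and project names here are hypothetical):

```text
/code-review analysis/02_regressions.do                  # quick review of one file (default mode)
/code-review analysis/02_regressions.do full fix         # deep review with project context, plus a corrected file
/code-review myproject pipeline                          # trace the pipeline in ~/Dropbox/Github/myproject/
/code-review replication_pkg/ replication severity:high  # AEA package audit, high-severity issues only
```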