data-dictionary
Data Dictionary Generator
v1.0 — Auto-generate comprehensive codebooks from .dta files
Reads Stata .dta files and produces a structured Markdown data dictionary with variable names, types, labels, value labels, summary statistics, and missingness. Outputs a ready-to-use codebook document.
Argument: $ARGUMENTS
- Path to a .dta file or directory containing .dta files
Modes (append to argument):
- `summary` (default) — One-page overview: variable list with types, labels, missingness
- `full` — Comprehensive codebook: summary + value labels + summary stats + distributions for key variables
- `analysis` — Analysis-ready: full + notes on which variables are outcomes vs controls, indices vs components
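The summary-mode pass can be sketched with pandas. This is an illustration, not the skill's documented implementation: the backend, function name, and output columns are all assumptions. The idea is to read the file once, pull variable labels from the Stata reader, and compute per-column missingness.

```python
# Sketch of a summary-mode pass over a .dta file (assumed pandas backend).
import pandas as pd

def summarize_dta(path):
    """Return one metadata row per variable: name, type, label, missingness."""
    with pd.read_stata(path, iterator=True) as reader:
        df = reader.read()                   # full dataset
        labels = reader.variable_labels()    # {varname: variable label}
    rows = []
    for col in df.columns:
        rows.append({
            "variable": col,
            "type": str(df[col].dtype),
            "label": labels.get(col, ""),
            "missing_pct": round(df[col].isna().mean() * 100, 1),
        })
    return pd.DataFrame(rows)
```

A table like this is what the one-page overview would render as Markdown; the `full` and `analysis` modes would layer value labels, summary statistics, and annotations on top of it.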
Flags:
- `vars:consumption,assets` — Only document variables matching these patterns
- `output:path/to/output.md` — Custom output path (default: same directory as input, named `codebook_[filename].md`)
- `format:md` (default) | `format:csv` — Output format
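A `vars:` filter could work like the sketch below. The matching rule (case-sensitive substring match over comma-separated patterns, keeping everything when no pattern is given) is an assumption, not the skill's specified behavior.

```python
# Hypothetical sketch of vars: flag handling (matching rule is assumed).
def filter_vars(columns, patterns):
    """Keep columns containing any comma-separated pattern as a substring."""
    pats = [p.strip() for p in patterns.split(",") if p.strip()]
    if not pats:
        return list(columns)  # no patterns: document everything
    return [c for c in columns if any(p in c for p in pats)]
```

For example, `vars:consumption,assets` would keep `consumption_pc` and `assets_total` while dropping `hhid`.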