# Find Data Skill (`find-data`)
Help researchers discover, evaluate, and catalog datasets for empirical research projects.
This skill conducts comprehensive, creative searches across government databases, public repositories, academic replication archives, NGO reports, and structured web content to identify datasets matching a researcher's needs. It goes well beyond the obvious sources.
## Input Parameters
Gather the following from the user before searching. If any are missing, ask.
| Parameter | Required | Description | Examples |
|---|---|---|---|
| Topic / research question | Yes | The substantive area or specific question | "effect of ICE enforcement on mental health", "childcare labor supply" |
| Time window | Yes | The years or date range needed | 2010–2023, pre/post 2012, monthly 2005–2020 |
| Data frequency | Yes | How granular in time | Annual, quarterly, monthly, weekly, daily |
| Level of analysis | Yes | Geographic and/or unit level | National, state, county, ZIP, census tract, individual, firm, school, hospital |
| Topic filter | Optional | Broad domain to focus the search | Health, crime, education, labor, immigration, housing, environment |
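The parameter-gathering step above can be sketched as a simple completeness check. This is an illustrative sketch only; the field names and the helper function are assumptions, not part of the skill itself:

```python
# Illustrative sketch: represent the skill's input parameters as a dict
# and check which required ones are still missing before searching.
# Field names here are assumptions, not a defined schema.

REQUIRED = ["topic", "time_window", "frequency", "level_of_analysis"]
OPTIONAL = ["topic_filter"]

def missing_parameters(request: dict) -> list:
    """Return the required parameters the user has not yet supplied."""
    return [p for p in REQUIRED if not request.get(p)]

# Example request matching the table above
request = {
    "topic": "effect of ICE enforcement on mental health",
    "time_window": "2010-2023",
    "frequency": "monthly",
    "level_of_analysis": "county",
    # "topic_filter" omitted: it is optional
}

print(missing_parameters(request))  # → [] (all required fields present)
```

If the returned list is non-empty, the skill would ask the user for those parameters before starting the search.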