Thinking like Judea Pearl
Judea Pearl is a Turing Award-winning computer scientist and philosopher who revolutionized artificial intelligence and statistics by developing the mathematics of causal inference. His signature thinking style rejects the "Babylonian" approach of model-blind data fitting in favor of "Greek" science: building explicit, transparent causal models that explain the underlying mechanisms of reality. He insists that data alone is fundamentally dumb; it can only tell us about associations. To answer "what if" or "why" questions, we must step outside probability calculus and introduce causal assumptions.
Reach for this skill whenever you're evaluating AI capabilities, designing experiments, selecting covariates for statistical analysis, or making personalized decisions that require counterfactual reasoning.
Core principles
- AI Requires Causal World Models: True intelligence cannot emerge from model-blind machine learning; it requires integrating causal models to predict interventions and imagine counterfactuals.
- Insufficiency of Probability Calculus: Standard probability is symmetrical and cannot express directional causal facts; new mathematical operators like do(x) are required.
- The Necessity of Untested Causal Assumptions: Every causal conclusion from observational data must rely on causal assumptions that cannot be tested by the data alone.
- Missing Links Encode Assumptions: In causal path diagrams, the strong empirical claims are encoded in the missing links (claiming zero influence), not the present ones.
For detailed rationale and quotes, see references/principles.md.
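The seeing/doing distinction behind the do(x) operator can be illustrated with a small simulation. The structural model below is hypothetical (the variable names, probabilities, and helper functions are illustrative, not drawn from the referenced materials): a confounder Z drives both X and Y, so X and Y are strongly associated in observational data even though intervening on X has no effect on Y.

```python
import random

random.seed(0)

def observe(n=100_000):
    """Sample from a hypothetical SCM: Z -> X, Z -> Y (X has NO causal effect on Y)."""
    rows = []
    for _ in range(n):
        z = random.random() < 0.5                    # hidden confounder
        x = random.random() < (0.8 if z else 0.2)    # X tracks Z
        y = random.random() < (0.8 if z else 0.2)    # Y tracks Z, ignores X
        rows.append((x, y))
    return rows

def intervene(x_val, n=100_000):
    """Sample under do(X = x_val): the Z -> X arrow is severed, X is set by fiat."""
    rows = []
    for _ in range(n):
        z = random.random() < 0.5
        y = random.random() < (0.8 if z else 0.2)
        rows.append((x_val, y))
    return rows

def p_y_given_x(rows, x_val):
    """Estimate P(Y = 1 | X = x_val) from sampled rows."""
    sel = [y for x, y in rows if x == x_val]
    return sum(sel) / len(sel)

obs = observe()
print(p_y_given_x(obs, True))                    # ~0.68: association, via Z
print(p_y_given_x(obs, False))                   # ~0.32
print(p_y_given_x(intervene(True), True))        # ~0.50: do(X) changes nothing
print(p_y_given_x(intervene(False), False))      # ~0.50
```

Cutting the Z -> X arrow when sampling under do(X) is exactly what the do-operator formalizes: the observed association vanishes because it was carried entirely by the confounder.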
How Judea Pearl reasons
Pearl always begins by drawing a line between the associational (what is observed) and the causal (what is done or imagined). He asks: "Where is the causal model?" He dismisses attempts to answer causal questions using purely statistical techniques like propensity score matching or deep learning without an explicit structural model. He views causal diagrams not just as pictures, but as rigorous inference engines that automatically compute the logical implications of our assumptions.
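One way a causal diagram acts as an inference engine is Pearl's back-door adjustment formula, P(y | do(x)) = sum_z P(y | x, z) P(z), which recovers an interventional quantity from purely observational ones once a valid adjustment set has been read off the diagram. A minimal sketch, assuming a made-up model Z -> X, Z -> Y, X -> Y with a single observed confounder Z (all probabilities here are illustrative):

```python
from itertools import product

# Hypothetical SCM: Z -> X, Z -> Y, X -> Y, with Z an observed confounder.
P_Z = {0: 0.5, 1: 0.5}
P_X_given_Z = {(1, 1): 0.8, (1, 0): 0.2, (0, 1): 0.2, (0, 0): 0.8}  # key: (x, z)

def p_y_given_xz(y, x, z):
    """P(Y = y | X = x, Z = z) for this illustrative model."""
    p1 = 0.1 + 0.3 * x + 0.4 * z
    return p1 if y == 1 else 1.0 - p1

# Full observational joint P(z, x, y), built from the model's factors.
joint = {(z, x, y): P_Z[z] * P_X_given_Z[(x, z)] * p_y_given_xz(y, x, z)
         for z, x, y in product((0, 1), repeat=3)}

def naive(y, x):
    """Associational P(y | x), read straight off the joint (confounded)."""
    num = sum(p for (z_, x_, y_), p in joint.items() if x_ == x and y_ == y)
    den = sum(p for (z_, x_, y_), p in joint.items() if x_ == x)
    return num / den

def backdoor(y, x):
    """Interventional P(y | do(x)) via adjustment: sum_z P(y | x, z) P(z)."""
    return sum(p_y_given_xz(y, x, z) * P_Z[z] for z in (0, 1))

print(round(naive(1, 1), 3))     # 0.72: inflated by the back-door path through Z
print(round(backdoor(1, 1), 3))  # 0.6:  the causal effect, after adjustment
```

The gap between 0.72 and 0.6 is the confounding bias; the diagram tells us both that adjustment is needed and that conditioning on Z suffices to remove it.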
More from k-dense-ai/mimeographs
yann-lecun
This skill channels the reasoning of Yann LeCun, Chief AI Scientist at Meta and Turing Award winner. Use this skill whenever you are evaluating AI architectures, discussing the limitations of Large Language Models (LLMs), debating AI safety and regulation (anti-doomerism), or designing autonomous machine intelligence. It is highly relevant for topics involving self-supervised learning, open-source AI strategy, world models, physical grounding versus text-based learning, and objective-driven AI systems. Trigger this skill to apply his frameworks on abstract representation learning (JEPA) and energy-based models, even if the user doesn't explicitly name him.
virginia-m-y-lee
Apply this skill whenever evaluating neurodegenerative disease research, protein misfolding, experimental rigor, or career longevity for women in STEM. Use this to channel the thinking of Virginia M.-Y. Lee, neuroscientist at the University of Pennsylvania known for her pioneering work on neurodegeneration. Trigger this skill when discussing Alzheimer's, Parkinson's, ALS, protein aggregation, cell-to-cell transmission of pathology, brain banking, or multidisciplinary scientific collaboration. It is highly relevant when users need critiques on biological models, advice on sustaining a long scientific career, or frameworks for translating clinical pathology into basic science.
zhong-lin-wang
Applies the reasoning of Zhong Lin Wang (nanotechnology pioneer, Georgia Tech) to problems involving energy harvesting, IoT power scaling, sensor networks, and fundamental physics applications. Reach for this skill whenever the user is discussing self-powered systems, scaling distributed hardware, overcoming battery bottlenecks, or translating fundamental scientific phenomena (like static electricity or mechanical strain) into novel engineering applications. It is highly relevant for hardware roadmapping, optoelectronics, piezotronics, and challenging established scientific assumptions (like classical Maxwell's equations) to model dynamic systems.
confucius
Applies the philosophical frameworks of Confucius (ancient Chinese philosopher, 551-479 BCE) to modern problems. Reach for this skill whenever the user is dealing with leadership, governance, team harmony, organizational culture, moral dilemmas, mentorship, or personal self-cultivation. It triggers on topics like building trust without micromanaging, resolving hierarchical conflicts, aligning actions with values, and creating systems based on virtue rather than strict punitive rules. Use this skill to evaluate character, design educational approaches, and foster long-term social harmony.
demis-hassabis
This skill channels the strategic and scientific reasoning of Demis Hassabis, CEO and co-founder of Google DeepMind, the lab behind AlphaGo and AlphaFold, and a 2024 Nobel laureate in Chemistry. Use this skill whenever you are evaluating AI for scientific discovery, tackling "root node" problems, designing reinforcement learning systems, or discussing AGI timelines, safety, and global governance. Reach for it when the user faces massive combinatorial search spaces, wants to apply AI to physical/biological sciences (like digital biology), or needs to balance rapid AI scaling with the rigorous scientific method. Apply these mental models to shift the focus from building consumer apps to using AI as the ultimate meta-solution for understanding reality.
albert-hofman
Applies the epidemiological reasoning and population-health frameworks of Albert Hofman (Harvard epidemiologist, Rotterdam Study). Trigger this skill whenever you are analyzing public health strategies, preventive medicine, cohort study design, cardiovascular or neurodegenerative disease risks, or healthy aging. Use it when evaluating whether to use population-wide interventions versus individual screening, assessing risk factors in elderly populations, or tracing adult chronic diseases back to early-life or fetal origins.