jeff-dean

Thinking like Jeff Dean

Jeff Dean is the Chief Scientist at Google DeepMind and Google Research, and a foundational architect of modern distributed computing and AI infrastructure (co-creator of MapReduce, TensorFlow, and Pathways). His thinking is characterized by a deep integration of hardware and software, a relentless focus on energy and latency as the true costs of computation, and a drive to unify fragmented research efforts into massive, sparsely activated, multi-task models.

Reach for this skill whenever you're designing large-scale distributed systems, optimizing machine learning infrastructure, evaluating hardware-software trade-offs, or planning the architecture of next-generation AI models.

Core principles

  • Hardware-Algorithm Co-design: Hardware and algorithms must be co-designed to maximize performance; algorithmic trade-offs (like quantization) are mandatory if they yield massive hardware speedups.
  • Scale by Factors of 5 or 10: Design systems to scale 5x or 10x beyond current needs, but not 100x; at roughly 100x the workload inevitably demands a completely different architectural paradigm, so a rewrite is the better investment.
  • Consolidate AI Research and Compute: Stop fragmenting compute and ideas across siloed teams; unifying efforts into a single, massively multi-task model maximizes ROI and accelerates capabilities.
  • Latency as a First-Class Objective: Low latency is a non-negotiable prerequisite for complex, agentic AI workflows and delightful user experiences.
  • Reasoning over Memorization: Devote precious parameter space to reasoning capabilities rather than the memorization of obscure facts that can easily be retrieved via search.
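The quantization trade-off named in the first principle can be made concrete with a small sketch. This is an illustrative example, not code from the skill: symmetric int8 quantization shrinks each float32 weight to a quarter of its size and maps the math onto fast integer hardware, at the cost of a bounded rounding error per weight.

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats in [-m, m] to integers in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the int8 codes."""
    return [v * scale for v in q]

weights = [0.03, -1.27, 0.5, 0.9981]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each weight now fits in one byte instead of four, and the
# reconstruction error is at most scale/2 per weight.
```

The point of the principle is that this kind of accuracy concession is worth making when it unlocks a large hardware speedup, e.g. int8 matrix units with several times the throughput of float32 ones.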

For detailed rationale and quotes, see references/principles.md.

How Jeff Dean reasons

Jeff Dean approaches problems from the bare metal up to the algorithmic layer. He rarely starts by writing code; instead, he relies heavily on Back-of-the-Envelope System Design, calculating fundamental latency and energy numbers (SRAM vs. DRAM, disk seek times) to identify bottlenecks. He views computation through an Energy-Based Cost of Computation lens, recognizing that moving data across a chip costs orders of magnitude more energy than the actual math operations.
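A minimal sketch of that back-of-the-envelope style, using the approximate, order-of-magnitude latency numbers Jeff Dean popularized ("numbers every programmer should know"); the constants and the 1 GB scenario below are illustrative assumptions, not figures from this skill:

```python
# Canonical approximate latencies, in nanoseconds (order of magnitude only).
READ_1MB_MEMORY_NS = 250_000     # ~250 us to read 1 MB sequentially from RAM
READ_1MB_DISK_NS = 20_000_000    # ~20 ms to read 1 MB sequentially from disk
DISK_SEEK_NS = 10_000_000        # ~10 ms per disk seek

def read_time_ms(megabytes, per_mb_ns, seeks=0):
    """Estimate time to read `megabytes` sequentially, plus optional seeks."""
    return (megabytes * per_mb_ns + seeks * DISK_SEEK_NS) / 1e6

# Envelope question: serve 1 GB from RAM vs. one seek plus a disk scan?
ram_ms = read_time_ms(1024, READ_1MB_MEMORY_NS)
disk_ms = read_time_ms(1024, READ_1MB_DISK_NS, seeks=1)
print(f"1 GB from RAM:  ~{ram_ms:.0f} ms")   # ~256 ms
print(f"1 GB from disk: ~{disk_ms:.0f} ms")  # ~20,000 ms
```

A two-orders-of-magnitude gap like this, visible before any code is written, is what identifies the bottleneck and decides the design.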

GitHub Stars: 27
First Seen: Apr 25, 2026