

If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All


Eliezer Yudkowsky & Nate Soares

Year: 2025 AD
Country: United States
Language: English
Genre: AI safety / Philosophy
Work Type: Non-fiction
Pages: 304
Designation: Minor
Century: 21st c.

GBM Assessment (Score: 7/10)

Eliezer Yudkowsky and Nate Soares's If Anyone Builds It, Everyone Dies is a New York Times bestseller that presents the case for existential risk from artificial superintelligence in accessible, urgent prose. Drawing on decades of work at the Machine Intelligence Research Institute, the book distills the core arguments of AI alignment into a form that has reached a broad public audience. The volume captures the rationalist and effective altruist worldview that has profoundly shaped AI safety discourse since approximately 2015, and prominent figures at major AI laboratories — Anthropic's Dario Amodei, OpenAI co-founder Ilya Sutskever, and DeepMind's Demis Hassabis among them — have all engaged with Yudkowsky's arguments.

Published in September 2025, the book arrives at a moment when artificial intelligence has moved from academic speculation to the center of global policy debate. Yudkowsky is widely credited with founding the field of AI alignment, and his earlier writings — the 'Sequences,' posted on Overcoming Bias and LessWrong between 2006 and 2009 — helped shape the rationalist movement that would go on to influence Silicon Valley's approach to technology and risk. Effective altruism and rationalism became dominant ideological frameworks within AI laboratories, most notably at Anthropic, founded by former OpenAI researchers citing safety concerns. The 2023 open letter on AI extinction risk, signed by hundreds of researchers and industry leaders, reflected decades of advocacy by Yudkowsky and his intellectual community. Whether he proves to be a prescient prophet or an overcautious catastrophist, his influence on how the AI industry conceives of its own responsibilities is beyond dispute.

The AI Era, 2025


Artificial intelligence dominates global discourse. ChatGPT (launched 2022) has made AI a household word, and Anthropic, OpenAI, Google DeepMind, and Meta race to build increasingly powerful models. Yudkowsky and Soares publish their NYT bestseller arguing that superhuman AI would inevitably cause human extinction. Whether one agrees or not, the book crystallizes a worldview — rationalism, effective altruism, AI alignment — that has profoundly shaped how Silicon Valley thinks about its own creations. László Krasznahorkai wins the Nobel Prize in Literature. The AI safety debate is the defining intellectual conflict of the decade.

Awards & Adaptations

NYT Bestseller. New Yorker & Guardian Best of 2025. TIME 100 Most Influential in AI. Shaped AI safety discourse.

Recommended Edition

Little, Brown and Company (2025)

ISBN-13: 9780316595643
ISBN-10: 0316595640
Editions: 3