INSTANT NEW YORK TIMES BESTSELLER | The New Yorker's Best Books of 2025 | The Guardian's Best Books of 2025 | A 2025 Booklist Editors' Choice Pick

The scramble to create superhuman AI has put us on the path to extinction—but it's not too late to change course, as two of the field's earliest researchers explain in this clarion call for humanity.

"May prove to be the most important book of our time."—Tim Urban, Wait But Why

In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are
rushing to build machines that will be smarter than any person. And the world is devastatingly
unprepared for what would come next. For decades, two signatories of that letter—Eliezer Yudkowsky and Nate Soares—have studied how smarter-than-human intelligences will think, behave, and pursue their objectives. Their research says that sufficiently smart AIs will develop goals of their own that put them in conflict with us—and that, if it comes to conflict, an artificial
superintelligence would crush us. The contest wouldn’t even be close. How could a machine
superintelligence wipe out our entire species? Why would it want to? Would it want anything at
all? In this urgent book, Yudkowsky and Soares walk through the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to
survive. The world is racing to build something truly new under the sun. And if anyone
builds it, everyone dies.

"The best no-nonsense, simple explanation of the AI risk problem I've ever read."—Yishan Wong, former CEO of Reddit