About this episode
Eliezer Yudkowsky is as afraid as you could possibly be. He makes his case.

Yudkowsky is a pioneer of A.I. safety research who started warning about the existential risks of the technology decades ago, influencing many leading figures in the field. But over the last couple of years, talk of an A.I. apocalypse has become a little passé. Many of the people Yudkowsky influenced have gone on to work for A.I. companies, and those companies are racing ahead to build the superintelligent systems Yudkowsky thought humans should never create. But Yudkowsky is still out there sounding the alarm. He has a new book out, co-written with Nate Soares, “If Anyone Builds It, Everyone Dies,” trying to warn the world before it’s too late.

So what does Yudkowsky see that most of us don’t? What makes him so certain? And why does he think he hasn’t been able to persuade more people?

Mentioned:

Oversight of A.I.: Rules for Artificial Intelligence

If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares

“A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.” by Kashmir Hill

Book Recommendations:

A Step Farther Out by Jerry Pournelle

Judgment under Uncertainty by Daniel Kahneman, Paul Slovic, and Amos Tversky

Probability Theory by E. T. Jaynes

Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.

You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

This episode of “The Ezra Klein Show” was produced by Rollin Hu. Fact-checking by Michelle Harris. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota. Our executive producer is Claire Gordon. The show’s production team also includes Marie Casci