About this episode
The US government asked Anthropic — the company behind Claude, one of the most capable AI coding systems on the market — to help build autonomous weapons and a mass surveillance infrastructure. Anthropic said no. That refusal, which happened the same week the US launched strikes on Iran, is either the most principled corporate decision in recent AI history or the beginning of a very ugly fight over who controls the most powerful tools ever built. Jeremy and Jason break down what the government actually asked for, why Anthropic refused, what OpenAI and Elon Musk did instead, and what it means for all of us when the people writing the guardrails are the same people being pressured to remove them.

Topics Discussed:
- Why autonomous AI weapons systems default to nuclear launch in virtually every war game simulation
- What Anthropic's Claude can actually do — and why the US government wants it so badly
- How AI turns existing NSA surveillance infrastructure into something exponentially more dangerous
- Why OpenAI and Elon Musk said yes to the same deal Anthropic refused
- Why the people most confident they're using AI as a tool might be the ones AI ends up using

Chapters:
0:00 — When AI Meets War: What We're Actually Talking About
1:15 — What Claude Can Really Do (And Why the Government Wants It)
4:18 — The Autonomous Cyber Weapon Problem
5:28 — Why Anthropic Said No to the Money
6:26 — Mass Surveillance, AI, and What's Already Running
9:45 — When War Games Go Nuclear: The 95% Problem
13:01 — AGI Is Already Here. We Just Didn't Call It That.
17:33 — Why Anthropic's Refusal Might Be Their Smartest Business Move
22:06 — Who's Actually Using Whom

MORE FROM BROBOTS:
Get the Newsletter!