About this episode
Eric Ho is building Goodfire to solve one of AI’s most critical challenges: understanding what’s actually happening inside neural networks. His team is developing techniques to understand, audit, and edit neural networks at the feature level. Eric discusses breakthrough results in resolving superposition through sparse autoencoders, successful model-editing demonstrations, and real-world applications in genomics with Arc Institute’s DNA foundation models. He argues that interpretability will be critical as AI systems become more powerful and take on mission-critical roles in society.
Hosted by Sonya Huang and Roelof Botha, Sequoia Capital
Mentioned in this episode:
Mech interp: Mechanistic interpretability, list of important papers here
Phineas Gage: 19th-century American railroad construction foreman who survived an accident that destroyed much of his brain’s left frontal lobe. Became a famous case study in neuroscience.
Human Genome Project: Effort from 1990 to 2003 to generate the first sequence of the human genome, which accelerated the study of human biology
Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs
Zoom In: An Introduction to Circuits: Foundational 2020 mechanistic interpretability paper from OpenAI
Superposition: Concept borrowed from physics and applied to interpretability: neural networks can simulate larger networks by representing more concepts than they have neurons
Apollo Research: AI safety company that designs AI model evaluations and conducts interpretability research
Towards Monosemanticity: Decomposing Language Models With Dictionary Learning. 2023 Anthropic paper that uses a sparse autoencoder to extract interpretable features; followed by Scaling Monosemanticity
Under the Hood of a Reasoning Model: 2025 Goodfire paper that interprets DeepSeek’s reasoning model R1
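The sparse autoencoder approach mentioned above (from Towards Monosemanticity) can be sketched in a few lines: an overcomplete hidden layer is trained to reconstruct model activations while an L1 penalty keeps its feature activations sparse. The sizes and hyperparameters below are illustrative assumptions, not values from the paper:

```python
# Minimal sketch of a sparse autoencoder (SAE) for interpretability.
# All dimensions and the L1 coefficient are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

d_model, d_hidden = 16, 64   # hidden layer wider than input (overcomplete)
W_enc = rng.normal(0, 0.1, (d_hidden, d_model))
b_enc = np.zeros(d_hidden)
W_dec = rng.normal(0, 0.1, (d_model, d_hidden))
b_dec = np.zeros(d_model)

def encode(x):
    # ReLU keeps feature activations non-negative; together with the
    # L1 penalty below, this drives them toward sparsity.
    return np.maximum(0.0, W_enc @ x + b_enc)

def decode(f):
    return W_dec @ f + b_dec

def loss(x, l1_coeff=1e-3):
    f = encode(x)
    x_hat = decode(f)
    # Reconstruction error plus an L1 sparsity penalty on the features.
    return np.sum((x - x_hat) ** 2) + l1_coeff * np.sum(np.abs(f))

x = rng.normal(size=d_model)  # stand-in for a model activation vector
f = encode(x)
print(f.shape)   # (64,) -- one activation per learned dictionary feature
```

Each hidden unit of a trained SAE then corresponds (ideally) to one interpretable feature, which is what makes feature-level auditing and editing possible.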