About this episode
Some people say that all that's needed to improve the capabilities of AI is to scale up existing systems: more training data, larger models with more parameters, and more chips to crunch through that data. In this episode, however, we'll be hearing from a computer scientist who thinks there are many other options for improving AI. He is Alexander Ororbia, a professor at the Rochester Institute of Technology in New York State, where he directs the Neural Adaptive Computing Laboratory.

David had the pleasure of watching Alex give a talk at the AGI 2024 conference in Seattle earlier this year, and found it fascinating. After you hear this episode, we hope you reach a similar conclusion.

Selected follow-ups:
Alexander Ororbia - Rochester Institute of Technology
Alexander G. Ororbia II - Personal website
AGI-24: The 17th Annual AGI Conference - AGI Society
Joseph Tranquillo - Bucknell University
Hopfield network - Wikipedia
Karl Friston - UCL
Predictive coding - Wikipedia
Mortal Computation: A Foundation for Biomimetic Intelligence - Quantitative Biology
The free-energy principle: a unified brain theory? - Nature Reviews Neuroscience
I Am a Strange Loop (book by Douglas Hofstadter) - Wikipedia
Mark Solms - Wikipedia
Conscium: Pioneering Safe, Efficient AI
The Hidden Spring: A Journey to the Source of Consciousness (book by Mark Solms)
Carver Mead - Wikipedia
Event camera (includes Dynamic Vision Sensors) - Wikipedia
ICRA (International Conference on Robotics and Automation)
Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment
A Review of Neuroscience-Inspired Machine Learning