About this episode
AI is not just getting smarter; it is getting faster by learning how to optimize the hardware it runs on. In this episode, Sharon Zhou, VP of AI at AMD and former Stanford AI researcher, explains how language models are beginning to write and optimize their own GPU kernel code. We explore what self-improving AI actually means, how reinforcement learning is used in post-training, and why kernel optimization could be one of the most overlooked scaling levers in modern AI.

Sharon breaks down how GPU efficiency affects the cost of training and inference, why catastrophic forgetting remains a challenge in continual learning, and how verifiable rewards from hardware profiling can help models improve themselves. The conversation also dives into compute economics, synthetic data, RLHF, and why infrastructure may define the next phase of AI progress. If you want to understand where AI scaling is really happening, beyond bigger models and more data, this episode goes under the hood.

Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI

(00:00) Preview and Intro
(00:25) Sharon Zhou's Background and Transition to AMD
(02:00) What Is Self-Improving