About this episode
Happy Friday, everyone! This week’s update is heavily shaped by you. After some recent feedback, I’m working to be intentional about highlighting not just the risks of AI, but also real wins I’m involved in. Amid all the dystopian noise, I want people to know it’s possible for AI to help people, not just hurt them. You’ll see that in the final segment, which I’ll try to include each week moving forward.

Oh, and one of this week’s stories came directly from a listener who shared how an AI system nearly wrecked their life. It’s a powerful reminder that what we talk about here isn’t just theory; it’s affecting real people, right now.

All four updates this week deal with the tension between moving fast and being responsible. They emphasize the importance of being intentional about how we handle power, pressure, and people in the age of AI.

With that, let’s get into it.

ChatGPT Didn’t Leak Your Private Conversations, But the Panic Reveals a Bigger Problem
You probably saw the headlines: “ChatGPT conversations showing up in Google search!” The truth? It wasn’t a breach, at least not in the way you might think. It was a case of people moving too fast, not reading the fine print, and accidentally sharing public links. I break down what really happened, why OpenAI shut the feature down, and what this teaches us about the cultural cost of speed over discernment.

Workday’s AI Hiring Lawsuit Just Took a Big Turn
Workday is already in court for alleged bias in its hiring AI, but now the judge wants a full list of every company that used it. Ruh-roh, George! This isn’t just a vendor issue anymore. I unpack how this sets a new legal precedent, what it means for enterprise leaders, and why blindly trusting software could drag your company into consequences you didn’t see coming.

How AI Nearly Cost One Man His Life-Saving Medication
A listener shared a personal story about how an AI system denied his long-standing prescription with zero human context.
Guess what saved it? A wave of people stepped in. It’s a chilling example of what happens when algorithms make life-and-death decisions without context, compassion, or recourse. I explore what this reveals about system design, bias, and the irreplaceable value of human community.

Yes, AI Can Improve Hiring; Here’s a Story Where It Did
As part of that new commitment, I want to end with a win. I share a project I worked on where AI actually helped more people get hired by identifying overlooked talent and recommending better-fit roles. It didn’t replace people; it empowered them. I walk through how we designed it, what made it work, and why this kind of human-centered AI is not only po