About this episode
Ever run an AI analysis on customer data, only to discover the numbers were fabricated and the insights completely generic? In this episode, Caitlin Sullivan, a user-research veteran who's trained hundreds of product and research professionals, shares her four prompting techniques for getting trustworthy, actionable insights out of any LLM. After 2,000+ hours of testing customer discovery workflows with AI, she's identified the failure modes that break AI analysis and the reliable fixes for each one.

In this episode, you'll learn:
• How to catch the two types of AI quote hallucinations
• Why AI defaults to useless generic themes and insights
• Which LLM is best for analysis work (and which one fabricates the most)
• How to turn vague signal into actual decision clarity
• The final verification pass that stress-tests everything before it hits a deck

Referenced:
• Caitlin Sullivan: https://www.linkedin.com/in/caitlindsullivan/
• Claude Code for Customer Insights (Maven course): https://maven.com/caitlin/claude-code-insights
• Claude: https://www.anthropic.com/claude
• ChatGPT: https://chatgpt.com/
• Gemini: https://gemini.google.com/
• NotebookLM: https://notebooklm.google.com/
• Maze: https://maze.co/
• Whoop: https://www.whoop.com/

Read the newsletter: https://www.lennysnewsletter.com/p/how-to-do-ai-analysis-you-can-actually

Subscribe to Lenny’s Newsletter: https://www.lennysnewsletter.com/subscribe

Subscribe:
• YouTube: https://www.youtube.com/@lennysreads
• Apple: https://podcasts.apple.com/us/podcast/lennys-reads/id1810314693
• Spotify: https://open.spotify.com/show/0IIunA06qMtrcQLfypTooj
• Substack: https://lennysreads.com/

Follow Lenny:
• Twitter: https://twitter.com/lennysan
• LinkedIn