About this episode
In this episode, Mike and Alex break down CVE-2026-32622, a critical vulnerability in the open-source SQLBot project that turned a helpful AI database assistant into a remote code execution pipeline through stored prompt injection.

Topics covered include:

- How the three-flaw attack chain works: from Excel upload to prompt poisoning to PostgreSQL command execution
- Why LLM-powered database tools create unique security risks that traditional defenses don't catch
- The broader pattern of prompt injection vulnerabilities in enterprise AI applications
- OWASP's defense-in-depth framework for securing LLM integrations
- Practical steps every organization should take: least privilege, input validation, output filtering, and structured prompts
- What this means for the future of AI-augmented developer and analyst tools
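As a rough illustration of the output-filtering step discussed in the episode, the sketch below shows one way an application might gate LLM-generated SQL before execution: reject anything that isn't a single read-only SELECT, which would block the PostgreSQL `COPY ... TO PROGRAM` command-execution trick. The function name and denylist here are illustrative assumptions, not SQLBot's actual code, and a real deployment would layer this with least-privilege database roles rather than rely on string checks alone.

```python
import re

# Illustrative denylist of write/DDL keywords and the COPY command,
# which PostgreSQL can abuse for command execution (COPY ... TO PROGRAM).
DENYLIST = re.compile(
    r"\b(insert|update|delete|drop|alter|create|grant|copy|truncate)\b",
    re.IGNORECASE,
)

def is_safe_select(sql: str) -> bool:
    """Return True only for a single, read-only SELECT statement."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:  # reject stacked statements like "SELECT 1; DROP ..."
        return False
    if not stripped.lower().startswith("select"):
        return False
    return not DENYLIST.search(stripped)

print(is_safe_select("SELECT name FROM users WHERE id = 1"))   # True
print(is_safe_select("SELECT 1; COPY users TO PROGRAM 'id'"))  # False
```

A filter like this is a last line of defense, not a substitute for the structured prompts and least-privilege credentials the hosts recommend.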