Safe AI Implementation
HomeEasy Prey › Episode

Safe AI Implementation

46:47 Apr 23, 2025
About this episode
Red models associated with AI technologies highlight real-world vulnerabilities and the importance of proactive security measures. It is vital to educate users about how to explore the challenges and keep AI systems secure. Today's guest is Dr. Aditya Sood. Dr. Sood is the VP of Security Engineering and AI Strategy at Aryaka and is a security practitioner, researcher, and consultant with more than 16 years of experience. He obtained his PhD in computer science from Michigan State University and has authored several papers for various magazines and journals. In this conversation, he will shed light on AI-driven threats, supply chain risks, and practical ways organizations can stay protected in an ever-changing environment. Get ready to learn how the latest innovations and evolving attack surfaces affect everyone from large companies to everyday users, and why a proactive mindset is key to staying ahead. Show Notes: [01:02] Dr. Sood has been working in the security industry for the last 17 years. He has a PhD from Michigan State University. Prior to Aryaka, he was a Senior Director of Threat Research and Security Strategy for the Office of the CTO at F5. [02:57] We discuss how security issues with AI are on the rise because of the recent popularity and increased use of AI. [04:18] The large amounts of data are convoluting how things are understood, the complexity is rising, and the threat model is changing. [05:14] We talk about the different AI attacks that are being encountered and how AI can be used to defend against these attacks. [06:00] Pre-trained models can contain vulnerabilities. [07:01] AI drift or model or concept drift is when data in the training sets is not updated. The data can be used in a different way. AI hallucinations also can create false output. [08:46] Dr. Sood explains several types of attacks that malicious actors are using. [10:07] Prompt injections are also a risk. [12:13] We learn about the injection mapping strategy. [13:54] We discuss the possibilities of using AI as a tool to bypass its own guardrails. [15:18] It's an arms race using AI to attack Ai and using AI to secure AI. [16:01] We discuss AI workload analysis. This helps to understand the way AI processes. This helps see the authorization boundary and the security controls that need to be enforced. [17:48] Being aware of the shadow AI running in the background. [19:38] Challe
Select an episode
0:00 0:00