About this episode
Get featured on the show by leaving us a voicemail: https://bit.ly/MIPVM

Full show notes: https://www.microsoftinnovationpodcast.com/768

Agentic AI is transforming enterprise technology by moving beyond content generation to autonomous action. In this episode of the Copilot Show, Mehrnoosh Sameki explores the risks, guardrails, and governance frameworks needed to deploy AI agents safely and effectively.

What you'll learn
- How agentic AI differs from generative AI and why it matters
- Key risks: task misalignment, prohibited actions, and sensitive data leakage
- Practical guardrails and evaluation strategies for AI agents
- How to manage agent sprawl with Microsoft Foundry Control Plane
- Why red teaming and observability are critical for AI safety

Highlights
- "Everything that I hear at work is about agentic AI."
- "Agents don't just output text or image. They take actions."
- "Task alignment and staying on task is a huge one."
- "Sensitive data leakage is more and more important."
- "Bad actors could overwrite those information with different techniques."
- "If you don't know how many agents are out there, huge safety risk."
- "We released something called Foundry Control Plane."
- "Each agent gets a unique identity to suspend, quarantine, or stop."
- "You can set org-wide policies against your agents."
- "Red teaming is huge for identifying the risks."
- "Our AI red teaming agent gives you a scorecard of vulnerabilities."

Mentioned
- Microsoft Foundry Control Plane: https://learn.microsoft.com/en-us/azure/ai-foundry/control-plane/overview
- Entra: https://www.microsoft.com/en-us/security/business/microsoft-entra
- Azure AI Foundry: https://azure.microsoft.com/en-us/products/ai-foundry
- PyRIT (open-source toolkit): https://github.com/Azure/PyRIT
- AI red teaming agent: https://learn.microsoft.com/en-us/azure/ai-foundry/concepts/ai-red-teaming-agent
- OpenAI partnership: https://blogs.microsoft.com/blog/2025/01/21/microsoft-and-openai-evolve-partnership-to-drive-the-next-phase-of-ai
- Credo AI: https://www.credo.ai

Keywords
agentic