
Stay ahead of the latest cybersecurity trends with Cyberside Chats! Listen to our weekly podcast every Tuesday at 6:30 a.m. ET, and join us live once a month for breaking news, emerging threats, and actionable solutions. Whether you’re a cybersecurity professional or an executive looking to understand how to protect your organization, cybersecurity experts Sherri Davidoff and Matt Durrin will help you stay informed and proactively prepare for today’s top cybersecurity threats, AI-driven attack and defense strategies, and more!
Join us monthly for an interactive Cyberside Chats: Live! Our next session will be announced soon.
Episodes

15 hours ago
AI is no longer a standalone tool; it is embedded directly into productivity platforms, collaboration systems, analytics workflows, and customer-facing applications. In this special Cyberside Chats episode, Sherri Davidoff and Matt Durrin break down why lack of visibility and control over AI has emerged as the first and most pressing top threat of 2026.
Using real-world examples like the EchoLeak zero-click vulnerability in Microsoft 365 Copilot, the discussion highlights how AI can inherit broad, legitimate access to enterprise data while operating outside traditional security controls. These risks often generate no alerts, no indicators of compromise, and no obvious “incident” until sensitive data has already been exposed or misused.
Listeners will walk away with a practical framework for understanding where AI risk hides inside modern environments, along with concrete steps security and IT teams can take to centralize AI usage, regain visibility, govern access, and apply long-standing security principles to this rapidly evolving attack surface.
Key Takeaways
1. Centralize AI usage across the organization. Require a clear, centralized process for approving AI tools and enabling new AI features, including those embedded in existing SaaS platforms.
2. Gain visibility into AI access and data flows. Inventory which AI tools, agents, and features are in use, which users interact with them, and what data sources they can access or influence.
3. Restrict and govern AI usage based on data sensitivity. Align AI permissions with data classification, restrict use for regulated or highly sensitive data sets, and integrate AI considerations into vendor risk management.
4. Apply the principle of least privilege to AI systems. Treat AI like any other privileged entity by limiting access to only what is necessary and reducing blast radius if credentials or models are misused.
5. Evaluate technical controls designed for AI security. Consider emerging solutions such as AI gateways that provide enforcement, logging, and observability for prompts, responses, and model access.
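To make the takeaways above concrete, here is a minimal, hypothetical sketch in Python of the gateway pattern described in items 1, 3, and 5: tools are approved centrally, each prompt is checked against a data-classification ceiling, and every decision is logged for visibility. All names (`Gateway`, `AIToolPolicy`, the sensitivity tiers) are illustrative assumptions, not a real product's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical data-classification tiers, least to most restricted.
SENSITIVITY = ["public", "internal", "confidential", "regulated"]

@dataclass
class AIToolPolicy:
    """Per-tool policy: the highest data classification the tool may touch."""
    name: str
    max_sensitivity: str

@dataclass
class Gateway:
    """Minimal AI-gateway sketch: centrally approve tools, enforce
    classification limits (least privilege), and log every decision."""
    policies: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

    def approve_tool(self, policy: AIToolPolicy) -> None:
        # Centralized approval: unapproved tools are denied by default.
        self.policies[policy.name] = policy

    def check_prompt(self, tool: str, data_sensitivity: str) -> bool:
        policy = self.policies.get(tool)
        allowed = (
            policy is not None
            and SENSITIVITY.index(data_sensitivity)
            <= SENSITIVITY.index(policy.max_sensitivity)
        )
        # Observability: record the decision whether allowed or blocked.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "tool": tool,
            "sensitivity": data_sensitivity,
            "allowed": allowed,
        })
        return allowed

gw = Gateway()
gw.approve_tool(AIToolPolicy("copilot", max_sensitivity="internal"))
print(gw.check_prompt("copilot", "internal"))      # within policy: allowed
print(gw.check_prompt("copilot", "regulated"))     # exceeds ceiling: blocked
print(gw.check_prompt("shadow-ai-app", "public"))  # unapproved tool: blocked
```

Note that the deny-by-default behavior for unapproved tools is what surfaces "shadow AI": blocked requests still land in the audit log, giving the inventory of attempted usage that takeaway 2 calls for.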
Resources
1. Microsoft Digital Defense Report 2025
2. NIST AI Risk Management Framework
https://www.nist.gov/itl/ai-risk-management-framework
3. Microsoft 365 Copilot Zero-Click AI Vulnerability (EchoLeak)
https://www.infosecurity-magazine.com/news/microsoft-365-copilot-zeroclick-ai/
4. Adapting to AI Risks: Essential Cybersecurity Program Updates
https://www.LMGsecurity.com/resources/adapting-to-ai-risks-essential-cybersecurity-program-updates/
5. Microsoft on Agentic AI and Embedded Automation (2026)
