
768
Downloads
25
Episodes
Stay ahead of the latest cybersecurity trends with Cyberside Chats—your go-to cybersecurity podcast for breaking news, emerging threats, and actionable solutions. Whether you’re a cybersecurity pro or an executive who wants to understand how to protect your organization, cybersecurity experts Sherri Davidoff and Matt Durrin will help you understand and proactively prepare for today’s top cybersecurity threats, AI-driven attack and defense strategies, and more!
Episodes

11 hours ago
11 hours ago
Can your AI assistant become a silent data leak? In this episode of Cyberside Chats, Sherri Davidoff and Matt Durrin break down EchoLeak, a zero-click exploit in Microsoft 365 Copilot that shows how attackers can manipulate AI systems using nothing more than an email. No clicks. No downloads. Just a cleverly crafted message that turns your AI into an unintentional insider threat.
They also share a real-world discovery from LMG Security’s pen testing team: how prompt injection was used to extract system prompts and override behavior in a live web application. With examples ranging from corporate chatbots to real-world misfires at Samsung and Chevrolet, this episode unpacks what happens when AI is left untested—and why your security strategy must adapt.
Key Takeaways
- Limit and review the data sources your LLM can access—ensure it doesn’t blindly ingest untrusted content like inbound email, shared docs, or web links.
- Audit AI integrations for prompt injection risks—treat language inputs like code and include them in standard threat models.
- Add prompt injection testing to every web app and email flow assessment, even if you’re using trusted APIs or cloud-hosted models.
- Red-team your LLM tools using subtle, natural-sounding prompts—not just obvious attack phrases.
- Monitor and restrict outbound links from AI-generated content, and validate any use of CSP-approved domains like Microsoft Teams.
Resources
#EchoLeak #Cybersecurity #Cyberaware #CISO #Microsoft #Microsoft365 #Copilot #AI #GenAI #AIsecurity #RiskManagement
No comments yet. Be the first to say something!