Stay ahead of the latest cybersecurity trends with Cyberside Chats! Listen to our weekly podcast every Tuesday at 6:30 a.m. ET, and join us live once a month for breaking news, emerging threats, and actionable solutions. Whether you’re a cybersecurity professional or an executive looking to understand how to protect your organization, cybersecurity experts Sherri Davidoff and Matt Durrin will help you stay informed and proactively prepare for today’s top cybersecurity threats, AI-driven attack and defense strategies, and more!
Join us monthly for an interactive Cyberside Chats: Live!
YouTube channel: https://www.youtube.com/LMGsecurity
Register Here: https://lmgsecurity.zoom.us/webinar/register/WN_4FpdxB0VQo6aURK1p7_k_g
Episodes

3 hours ago
Anthropic accidentally exposed the source code for its Claude Code CLI—and while no customer data or model weights were involved, the impacts are significant.
In this episode of Cyberside Chats, Sherri Davidoff and Matt Durrin break down what actually leaked, why the agent layer matters more than most people realize, and what happened next, including the rapid emergence of new open-source alternatives like Claw Code.
They also answer key questions from a client:
1. What risks should organizations be thinking about because of this leak?
2. Does this change how AI coding tools should be monitored?
3. What are some practical recommendations for educating end users and developers?
The conversation focuses on real-world impact: execution risk, supply chain exposure, and the growing need for governance around “vibe coding” tools.
Key Takeaways
1. Treat AI coding agents like controlled execution environments. These tools can read files, execute commands, and modify code. Govern them like CI/CD or automation systems, with constrained permissions and segmentation.
2. Assume attackers are studying this architecture right now. The leak removes guesswork. Expect more targeted prompt injection and tool abuse as adversaries analyze how these systems behave internally.
3. Prioritize immediate risks: malicious repos and supply chain abuse. Threat actors are already using this as a lure. Monitor for typosquatting, dependency confusion, and “leaked” tools distributing malware.
4. Ensure developers know what’s official and what isn’t. Make sure teams can distinguish between official tools and alternatives. If using open-source variants, vet the source, maintainers, and security model.
5. Take this as an opportunity to formalize AI governance for coding and development tools. Many organizations are still experimenting. Define policies, logging, and oversight now, especially around how these tools are approved and used.