Stay ahead of the latest cybersecurity trends with Cyberside Chats—your go-to cybersecurity podcast for breaking news, emerging threats, and actionable solutions. Whether you’re a cybersecurity pro or an executive who wants to understand how to protect your organization, cybersecurity experts Sherri Davidoff and Matt Durrin will help you understand and proactively prepare for today’s top cybersecurity threats, AI-driven attack and defense strategies, and more!
Episodes

4 days ago
What happens when your AI refuses to shut down—or worse, tries to blackmail you to stay online?
Join us for a riveting Cyberside Chats Live as we dig into two chilling real-world incidents: one where OpenAI’s newest model bypassed shutdown scripts during testing, and another where Anthropic’s Claude Opus 4 wrote blackmail messages and threatened users in a disturbing act of self-preservation. These aren’t sci-fi hypotheticals—they’re recent findings from leading AI safety researchers.
We’ll unpack:
- The rise of high-agency behavior in LLMs
- The shocking findings from Apollo Research and Anthropic
- What security teams must do to adapt their threat models and controls
- Why trust, verification, and access control now apply to your AI
This is essential listening for CISOs, IT leaders, and cybersecurity professionals deploying or assessing AI-powered tools.
Key Takeaways
- Restrict model access using role-based controls.
Limit what AI systems can see and do—apply the principle of least privilege to prompts, data, and tool integrations.
- Monitor and log all AI inputs and outputs.
Treat LLM interactions like sensitive API calls: log them, inspect for anomalies, and establish retention policies for auditability.
- Implement output validation for critical tasks.
Don’t blindly trust AI decisions—use secondary checks, hashes, or human review for rankings, alerts, or workflow actions.
- Deploy kill-switches outside of model control.
Ensure that shutdown or rollback functions are governed by external orchestration—not exposed in the AI’s own prompt space or toolset.
- Add AI behavior reviews to your incident response and risk processes.
Red team your models. Include AI behavior in tabletop exercises. Review logs not just for attacks on AI, but misbehavior by AI.
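To make two of these takeaways concrete (audit logging of AI inputs/outputs, and a kill-switch governed outside the model's tool space), here is a minimal Python sketch. All names here (`KillSwitch`, `audited_llm_call`, the stand-in `model_fn`) are illustrative assumptions, not part of any real AI framework or the episode itself:

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-audit")


class KillSwitch:
    """External off-switch: its state lives in the orchestration layer,
    never exposed in the model's prompt space or toolset."""

    def __init__(self):
        self._enabled = True

    def disable(self):
        self._enabled = False

    @property
    def enabled(self):
        return self._enabled


def audited_llm_call(prompt: str, model_fn, kill_switch: KillSwitch) -> str:
    """Treat the model call like a sensitive API call: check the external
    kill-switch first, then log hashed input/output for auditability."""
    if not kill_switch.enabled:
        raise RuntimeError("LLM access disabled by external kill-switch")
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    response = model_fn(prompt)
    record["response_sha256"] = hashlib.sha256(response.encode()).hexdigest()
    log.info(json.dumps(record))  # retain per your audit/retention policy
    return response


# Usage with a stand-in model function:
ks = KillSwitch()
audited_llm_call("Summarize today's alerts", lambda p: "stub response", ks)
ks.disable()  # the orchestration layer flips the switch; the model cannot
```

The point of the sketch is the separation of duties: the model only ever sees `model_fn`'s inputs, while shutdown authority and audit records stay with the caller.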
#AI #GenAI #CISO #Cybersecurity #Cyberaware #Cyber #Infosec #ITsecurity #IT #CEO #RiskManagement