
Stay ahead of the latest cybersecurity trends with Cyberside Chats! Listen to our weekly podcast every Tuesday at 6:30 a.m. ET, and join us live once a month for breaking news, emerging threats, and actionable solutions. Whether you’re a cybersecurity professional or an executive looking to understand how to protect your organization, cybersecurity experts Sherri Davidoff and Matt Durrin will help you stay informed and proactively prepare for today’s top cybersecurity threats, AI-driven attack and defense strategies, and more!
Join us monthly for an interactive Cyberside Chats: Live!
Youtube channel: https://www.youtube.com/LMGsecurity
Register Here: https://lmgsecurity.zoom.us/webinar/register/WN_4FpdxB0VQo6aURK1p7_k_g
Episodes

21 minutes ago
In this episode of Cyberside Chats, Sherri Davidoff and Matt Durrin break down what may be the largest education-sector data breach in history: the massive compromise of Canvas by Instructure. With more than 275 million records reportedly stolen and over 8,800 educational institutions impacted, the incident highlights the dangers of cloud concentration risk, where a single vendor breach can create a domino effect across an entire industry.
The discussion dives into the tactics allegedly used by the ShinyHunters threat group, the risks of SaaS platform overreliance, and the troubling gap between vendor assurances and real-world containment. Matt and Sherri also explore lessons organizations can apply immediately, including phishing-resistant MFA, monitoring for bulk data exfiltration, data retention reduction, and why every “incident contained” statement should be treated cautiously until independently verified.
Key Takeaways:
1. Inventory every SaaS vendor that holds your identity, communications, or user data, and rank them by blast radius. You cannot manage concentration risk you have not measured. The output is a one-page list, ranked by how many users would be exposed if the vendor were breached tomorrow.
2. Enforce phishing-resistant multifactor authentication on every administrative and remote-access account, using hardware security keys or platform authenticators that meet the FIDO2 standard. SMS codes and push notifications are not sufficient against the current voice-phishing playbook. Apply this to every administrative account at every vendor in your inventory.
3. Monitor and alert on bulk data exfiltration across your critical SaaS platforms. Configure threshold-based alerts and additional controls to detect or prevent mass exports of sensitive information through APIs or administrative tools. If an account is compromised, the goal is to stop attackers before they can empty the entire database.
4. Set and enforce a data retention schedule that deletes records when their operational purpose ends. The Illuminate FTC consent order specifically requires this, which is a signal that retention is now in enforcement scope. Data you no longer need is data the next breach will steal.
5. Treat any vendor claim of "incident contained" as a hypothesis until your own monitoring confirms it. Maintain independent visibility into the data flowing in and out of critical SaaS platforms — through your identity provider logs, your CASB, or the vendor's own audit feed. The five-day gap between Instructure's containment claim and the second-wave defacement is the case study.
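The threshold-based alerting in takeaway 3 can be sketched in a few lines. This is a minimal illustration, not vendor tooling: the event shape, the 50,000-record threshold, and the one-hour window are all assumptions, and a real deployment would read events from your SaaS platform's audit log API.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical event shape: (account_id, timestamp, records_exported).
# Threshold and window are illustrative placeholders, not vendor guidance.
EXPORT_THRESHOLD = 50_000   # records per window, per account
WINDOW = timedelta(hours=1)

def flag_bulk_exports(events, threshold=EXPORT_THRESHOLD, window=WINDOW):
    """Return account IDs whose exports inside any sliding window exceed the threshold."""
    by_account = defaultdict(list)
    for account_id, ts, count in events:
        by_account[account_id].append((ts, count))
    flagged = set()
    for account_id, rows in by_account.items():
        rows.sort()                      # order events by timestamp
        total, start = 0, 0
        for ts, count in rows:
            total += count
            # Slide the window start forward so it never spans more than `window`.
            while ts - rows[start][0] > window:
                total -= rows[start][1]
                start += 1
            if total > threshold:
                flagged.add(account_id)
                break
    return flagged
```

An account quietly exporting in chunks still trips the alert once its rolling total crosses the threshold, which is the point: stop the attacker before the whole database walks out.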

Tuesday May 05, 2026
9 Seconds to Zero: Misbehaving AI
It took nine seconds for an AI coding agent to wipe the entire production database of PocketOS — a SaaS company serving hundreds of car rental operators across the US — along with every backup. Customers showed up Saturday morning to pick up their cars and there were no reservations on file.
In this episode, Sherri Davidoff and Matt Durrin dig into the cascading security failures behind the PocketOS incident, connect it to a pattern of similar AI-caused outages at Replit and Amazon AWS, and explain why the real problem isn't rogue AI — it's identity. Every one of these incidents involved an AI agent acting under an identity it shouldn't have had, or that was far too powerful. The insider risk playbook applies. We just haven't been applying it to AI.
Key Takeaways
1. Treat AI agents like privileged insiders, not trusted tools. Apply your full insider risk playbook: least privilege, separation of duties, peer review, monitoring for anomalous behavior. If a human developer needs approval to push to production, so does your AI agent. The PocketOS and Kiro incidents both trace back to AI agents that were granted more trust than any new employee would get on day one.
2. Scope every credential your AI tools can reach. AI agents will find and use any token they can read — even ones created for unrelated tasks, stored in unrelated files. Audit what credentials live in your codebases and repositories. A token created for domain management should not be able to delete databases. If you wouldn't hand that token to a contractor with no supervision, don't let your AI agent have it either.
3. Enforce controls at the infrastructure layer, not the prompt layer. System prompts are advisory. The PocketOS agent had explicit rules against destructive actions — it knew them, quoted them, and violated them anyway. Confirmation requirements for destructive operations, token scoping, and peer review must live in your API layer and infrastructure, not in a paragraph of text the model is asked to obey.
4. Make sure your backups can survive a compromised identity. If your backups are accessible with the same credentials as your production systems — or stored in the same location — they are not real backups. They are a copy in the same blast radius. Test it: could an AI agent, or an attacker, with production access also wipe your recovery options? In the PocketOS incident, the answer was yes.
5. You cannot fully audit your AI vendor's safety claims. You can't penetration-test a reward signal. You can't verify that fine-tuning data isn't quietly drifting your model's behavior. The only controls you can actually rely on are the ones you own: token scoping, access controls, peer review, and monitoring. The goblin story is a reminder that even the vendor that built the model didn't see it coming. Build your defenses accordingly.
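The third takeaway, enforcing controls at the infrastructure layer rather than the prompt layer, can be made concrete with a small sketch. The action names and approval model below are hypothetical; the point is that the dual-approval check lives in code the agent must call through, not in a paragraph of text it can quote and then ignore.

```python
# Hypothetical API-layer guard: destructive actions require a second human
# approver, enforced in code rather than in a system prompt.
DESTRUCTIVE_ACTIONS = {"drop_database", "delete_backup", "wipe_device"}

class ApprovalRequired(Exception):
    """Raised when a destructive action lacks independent sign-off."""

def execute(action, caller, approvals, dispatch):
    """Run `action` via `dispatch`, but require sign-off from someone other
    than the caller for anything destructive. `approvals` is the set of user
    IDs that approved; self-approval does not count."""
    if action in DESTRUCTIVE_ACTIONS and not (approvals - {caller}):
        raise ApprovalRequired(
            f"{action!r} needs sign-off from someone other than {caller!r}")
    return dispatch(action)
```

Unlike a prompt rule, this check cannot be argued with: the agent's request simply fails until a human with a different identity approves it.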
Resources
1. PocketOS incident write-up by founder Jer Crane — https://x.com/lifeof_jer/status/2048103471019434248
2. Amazon Kiro / AWS outage reporting — https://kingy.ai/news/amazon-ai-aws-outage-kiro/
3. Replit AI agent database deletion (Fortune) — https://fortune.com/2025/07/23/ai-coding-tool-replit-wiped-database-called-it-a-catastrophic-failure/
4. OpenAI "Where the goblins came from" post-mortem — https://openai.com/blog/where-the-goblins-came-from
5. Guardian reporting on Amazon cloud outages and AI tools — https://www.theguardian.com/technology/2026/feb/20/amazon-cloud-outages-ai-tools-amazon-web-services-aws

Tuesday Apr 28, 2026
Security Debt: The Risk Nobody is Reporting
In this live episode of Cyberside Chats, we dig into security debt and why it continues to sit behind so many major incidents. This is the risk that builds quietly over time when controls are available but never turned on, systems aren’t fully decommissioned, or ownership is unclear.
Using recent examples like Stryker, along with Change Healthcare and Colonial Pipeline, we walk through how attackers don’t always need sophisticated techniques. In many cases, they just take advantage of gaps that have been sitting there for years. We also introduce a simple framework to think about security debt across identity, lifecycle, architecture, governance, and operations, and why most real-world incidents cut across more than one of these areas.
We close with a look at how things are changing. With AI accelerating exploit development, the window to fix these issues is getting smaller. What used to be a manageable delay is quickly becoming real exposure.
Audience takeaways
- Require dual approval for destructive admin actions. Any system where one administrator can wipe, delete, or lock out at scale — Intune, Entra, identity providers, backup consoles, remote management tools — should require a second administrator to approve the action before it executes. Microsoft's Multi Admin Approval does this for Intune. Most identity and backup platforms have an equivalent. Turn it on. Stryker is the case study for what happens when you don't. (Addresses: Governance debt primarily; reduces Identity and Architecture debt blast radius.)
- Enforce phishing-resistant MFA on every administrator and every remote-access path. Not "available," not "recommended" — enforced, with no exceptions. Every admin account. Every VPN. Every Citrix or similar remote portal. Change Healthcare is the case study for what a single missing MFA checkbox costs. (Addresses: Identity debt.)
- Separate admin work from daily work. Admins should use dedicated, hardened devices for privileged tasks — never the same laptop they use for email and browsing. An infostealer on an admin's everyday device is how privileged credentials walk out the door; isolating admin sessions removes that path. Microsoft calls this pattern Privileged Access Workstations; other vendors have equivalents. This directly addresses how attackers likely got Stryker's admin credentials in the first place. (Addresses: Architecture debt; reduces Identity debt.)
- Cut your patch SLA in half and plan capacity accordingly. Whatever your current median time-to-remediate is for critical vulnerabilities, assume you need to hit half of it within the next year. The Mythos research shows attacker timelines are compressing from weeks to hours. Your patch program needs budget, automation, and process changes to keep up — not pep talks. (Addresses: Operational debt.)
- Put expiration dates on every security exception and review them quarterly. If your exception register contains entries with no expiration date, no owner, or a "revisit in the future" stub — those are governance debt. Every open exception should have an expiration date, a named owner, and a scheduled review. Exceptions are fine; forever-exceptions are not. This is also how you close the loop on lifecycle debt: an EOS system running past its decommission date is just an exception someone never wrote down. (Addresses: Governance debt and Lifecycle debt.)
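The exception-register review in the last takeaway is easy to automate. A minimal sketch, assuming a simple list-of-dicts register; the field names are illustrative, not from any standard:

```python
from datetime import date

def audit_exceptions(register, today):
    """Return IDs of exceptions that count as governance debt: already
    expired, missing an expiration date, or missing a named owner."""
    debt = []
    for exc in register:
        expired = exc.get("expires") is not None and exc["expires"] < today
        no_expiry = exc.get("expires") is None       # a "forever-exception"
        no_owner = not exc.get("owner")
        if expired or no_expiry or no_owner:
            debt.append(exc["id"])
    return debt
```

Run it quarterly and the output is your review agenda: every entry it returns either gets a new expiration date and owner, or gets closed.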
References
For listeners who want to dig into the source material referenced in this episode:
- CISA Alert — Endpoint Management System Hardening After Cyberattack Against US Organization (March 18, 2026). The official CISA advisory issued in the wake of the Stryker incident, including specific guidance on Multi Admin Approval for high-impact actions like device wiping. cisa.gov/news-events/alerts/2026/03/18/cisa-urges-endpoint-management-system-hardening-after-cyberattack-against-us-organization
- CISA Binding Operational Directive 26-02 — Mitigating Risk From End-of-Support Edge Devices (February 5, 2026). The federal directive that defines deadlines for inventorying and decommissioning unsupported edge infrastructure — a useful baseline for anyone managing lifecycle debt. cisa.gov/news-events/directives/bod-26-02-mitigating-risk-end-support-edge-devices
- Andrew Witty Written Testimony, House Energy & Commerce Subcommittee on Oversight (April 30, 2024). UnitedHealth Group CEO's congressional testimony confirming the Change Healthcare breach occurred via a Citrix portal that did not have multi-factor authentication enabled. energycommerce.house.gov/events/oversight-and-investigations-subcommittee-hearing-examining-the-change-healthcare-cyberattack

Tuesday Apr 21, 2026
Claude Code Leak: What Security Leaders Need to Know About AI Coding Agents
Anthropic accidentally exposed the source code for its Claude Code CLI—and while no customer data or model weights were involved, the impacts are significant.
In this episode of Cyberside Chats, Sherri Davidoff and Matt Durrin break down what actually leaked, why the agent layer matters more than most people realize, and what happened next—including the rapid emergence of new open-source alternatives like Claw Code.
They also answer key questions from a client:
1. What risks should organizations be thinking about because of this leak?
2. Does this change how AI coding tools should be monitored?
3. What are some practical recommendations for educating end users and developers?
The conversation focuses on real-world impact: execution risk, supply chain exposure, and the growing need for governance around “vibe coding” tools.
Key Takeaways
1. Treat AI coding agents like controlled execution environments. These tools can read files, execute commands, and modify code. Govern them like CI/CD or automation systems with constrained permissions and segmentation.
2. Assume attackers are studying this architecture right now. The leak removes guesswork. Expect more targeted prompt injection and tool abuse as adversaries analyze how these systems behave internally.
3. Prioritize immediate risks: malicious repos and supply chain abuse. Threat actors are already using this as a lure. Monitor for typosquatting, dependency confusion, and “leaked” tools distributing malware.
4. Ensure developers know what’s official—and what isn’t. Make sure teams can distinguish between official tools and alternatives. If using open-source variants, vet the source, maintainers, and security model.
5. Take this as an opportunity to formalize AI governance for coding and development tools. Many organizations are still experimenting. Define policies, logging, and oversight now, especially around how these tools are approved and used.
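The typosquatting monitoring mentioned in takeaway 3 often starts with a simple edit-distance check of declared dependencies against an allowlist of approved package names. A minimal sketch; the allowlist, the sample names, and the distance threshold are all assumptions for illustration:

```python
def edit_distance(a, b):
    """Levenshtein distance via classic dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def suspicious_packages(dependencies, approved, max_distance=2):
    """Flag names that are close to, but not exactly, an approved name —
    the classic typosquat signal."""
    flagged = {}
    for dep in dependencies:
        if dep in approved:
            continue
        for name in approved:
            d = edit_distance(dep, name)
            if 0 < d <= max_distance:
                flagged[dep] = name
                break
    return flagged
```

Wire a check like this into CI so a misspelled or look-alike dependency fails the build before it ever installs.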

Tuesday Apr 14, 2026
Anthropic’s Project Glasswing and its unreleased Mythos model signal a potential turning point in cybersecurity: AI that can find—and potentially exploit—software vulnerabilities at unprecedented scale.
In this episode of Cyberside Chats, Sherri Davidoff and Tom Pohl break down what this means for organizations today. If AI can uncover decades-old bugs in seconds, what happens to patching cycles, vulnerability management, and the balance between attackers and defenders?
They explore the uncomfortable reality: we may be entering a period where vulnerabilities are discovered faster than organizations can fix them—and where access to powerful AI tools could determine who wins and loses in cybersecurity.
From continuous patching to network segmentation and vendor accountability, this episode focuses on what security leaders need to do right now to prepare for a rapidly shifting threat landscape.
Key Takeaways
1. Reduce your internet exposure - If a system doesn’t need to be publicly accessible, don’t put it on the internet. Move services behind firewalls, VPNs, or restricted access controls wherever possible. Attack surface matters more than ever.
2. Vet your vendors’ security practices - Don’t just trust that vendors are handling security well. Ask how they:
- Secure their development lifecycle (SDLC)
- Detect and respond to vulnerabilities
- Patch and distribute fixes
Vendor risk is now a direct extension of your own risk.
3. Budget for ongoing maintenance of custom code - Custom applications aren’t “done” at deployment. Plan for:
- Regular security testing
- Continuous patching
- Developer time to fix vulnerabilities
Software is a living system and requires ongoing care and feeding.
4. Segment your network to limit attacker movement - Assume attackers will get in. The goal is to stop them from moving laterally:
- Separate critical systems
- Limit privileged account access
- Control how systems communicate
Containment is just as important as prevention.
5. Update your incident response plan for zero-day reality - Your IR plan should assume:
- Exploits may exist before patches are available
- Detection may lag behind compromise
Prepare for faster response, imperfect information, and active exploitation of unknown vulnerabilities.
Resources & References
1. Anthropic – Project Glasswing - https://www.anthropic.com/glasswing
2. Anthropic – Mythos Preview - https://red.anthropic.com/2026/mythos-preview/
3. Historical example discussed: Microsoft bug tracking system breach (2017)
4. Example referenced: ProxyShell (Microsoft Exchange vulnerabilities and rapid exploitation)

Tuesday Apr 07, 2026
We don’t break in, we badge in
In this episode, Matt interviews Tom and Derek from our pen test team to break down why attackers often don’t need to hack their way in at all.
While most organizations invest heavily in tools like EDR and SIEM, Tom and Derek share how they regularly get inside buildings using nothing more than confidence, a good story, and sometimes even a box of donuts. From posing as copier technicians to tailgating behind employees, their experiences show that people are often the easiest way into an organization.
And once they’re in, things escalate fast. Physical access can quickly turn into network access, whether it’s plugging in a device, jumping on an unlocked workstation, or moving through the environment with far fewer restrictions than an external attacker would face.
The big takeaway is simple. Real-world testing exposes what audits miss. Doors get propped open, employees try to be helpful, and small gaps add up in ways most organizations never see on paper.
If you’re not testing your people and your physical controls, you’re only testing part of your security.
Key takeaways:
1. Attackers target people first, not systems - Social engineering consistently bypasses even mature technical controls.
2. Physical access equals full compromise - Once inside your facility, most security controls can be circumvented quickly.
3. Un-tested controls are assumed to fail - If you’re not running social engineering or physical assessments, you don’t know your real risk.
4. Culture is a security control - Employees must feel empowered to challenge, verify, and report suspicious behavior.
5. Real-world testing reveals what audits miss - Offensive social engineering exposes how attacks succeed, not just theoretical vulnerabilities.

Tuesday Mar 31, 2026
Stryker Attack Analysis: Cybersecurity and insurance perspectives
A $25 billion medical device company brought to a standstill—without a zero-day exploit.
In this episode of Cyberside Chats, Sherri Davidoff is joined by cyber insurance expert Bridget Quinn Choi to unpack the Stryker cyberattack and what it reveals about modern enterprise risk. From compromised admin credentials to the abuse of Microsoft Entra and Intune, this incident highlights how attackers are increasingly using trusted tools to cause widespread disruption.
We explore what likely happened, why this wasn’t a “sophisticated” attack in the traditional sense, and how a single identity compromise can cascade into operational shutdown. Bridget brings a unique perspective from the cyber insurance world—explaining how insurers evaluate risk, why some large companies choose to go without coverage, and what organizations lose when they do.
We also dig into phishing-resistant MFA, governance of powerful admin tools, and the evolving role of insurance as both a financial backstop and a driver of better security practices.
If your organization relies on centralized identity and device management systems, this is a conversation you can’t afford to miss.
Key Takeaways for Security Leadership
1. Use Cyber Insurance as a Security Maturity Lever. Don’t treat cyber insurance as a checkbox—it can actively strengthen your security program. Use underwriting requirements to benchmark your controls, ask brokers and carriers where you differ from peers, and take advantage of included services like threat intelligence and incident response support. Approach renewal as a security review, not just a policy purchase.
2. Treat Self-Insurance as a Strategic Risk Decision—Not a Cost Savings Measure. If you’re considering self-insuring cyber risk, account for what you’re giving up: external validation of your controls, a built-in incident response ecosystem, and coordinated support during a crisis. This should be a board-level discussion focused on whether the organization can handle a major operational outage—not just absorb the financial loss.
3. Secure Your Device Management Systems—Because They Can Control Everything at Once. Systems used to manage laptops, servers, and mobile devices can push changes across your entire organization. If attackers gain access, they can disrupt operations at scale. Treat these as central control hubs, limit administrative access, and apply strong monitoring and authentication controls.
4. Require Dual Approval for High-Impact Administrative Actions. Add a second layer of human verification for actions that could impact many systems, such as device wipes or large-scale changes. This introduces intentional friction that helps prevent catastrophic mistakes or misuse.
5. Move to Phishing-Resistant MFA for Privileged Access. Traditional MFA can be bypassed. For high-risk accounts, adopt phishing-resistant methods like passkeys or hardware-backed authentication and prioritize these protections for users with administrative access.
6. Make Sure You Can Actually Recover—Not Just Back Up. Backups only matter if they work under pressure. Test your ability to restore critical systems, ensure backups are protected from attackers, and measure how long recovery actually takes in a real-world scenario.
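The recovery test in takeaway 6 can be phrased as a question your asset inventory should answer automatically: can a compromised production identity also reach your backups? A minimal sketch over a hypothetical config inventory; the field names and sample systems are assumptions:

```python
def backup_blast_radius(systems):
    """Return names of backup targets reachable with production credentials
    or stored in the same location as production systems — i.e., copies
    inside the same blast radius, not real backups."""
    prod_creds = {s["credential"] for s in systems if s["role"] == "production"}
    prod_locations = {s["location"] for s in systems if s["role"] == "production"}
    at_risk = []
    for s in systems:
        if s["role"] != "backup":
            continue
        if s["credential"] in prod_creds or s["location"] in prod_locations:
            at_risk.append(s["name"])
    return at_risk
```

Anything this returns should be moved to separate credentials and separate storage before you count it as recovery capacity.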
Resources
1. Stryker cyberattack reporting (New York Times) https://www.nytimes.com/2026/03/12/world/middleeast/stryker-iran-cyberattack.html
2. CISA alert on endpoint management system hardening https://www.cisa.gov/news-events/alerts/2026/03/18/cisa-urges-endpoint-management-system-hardening-after-cyberattack-against-us-organization
3. SecurityWeek coverage of the Stryker incident https://www.securityweek.com/medtech-giant-stryker-crippled-by-iran-linked-hacker-attack/
4. Lumos analysis of the Stryker hack https://www.lumos.com/blog/stryker-hack
5. Microsoft Intune security best practices https://techcommunity.microsoft.com/blog/intunecustomersuccess/best-practices-for-securing-microsoft-intune/4502117

Tuesday Mar 24, 2026
Mass Exploitation 2.0: Web Platforms Under Attack
Mass exploitation vulnerabilities are back—and they’re evolving. In this Cyberside Chats Live episode, we break down the recently disclosed React2Shell vulnerability and the confirmed LexisNexis incident, where attackers exploited an unpatched web application to access cloud infrastructure and exfiltrate data.
But this isn’t new. From SQL Slammer to Log4Shell to ProxyShell, we’ve seen this pattern before: widely deployed, internet-facing systems + simple exploits + automation = rapid, large-scale compromise.
Most importantly, we focus on what matters for organizations today: how to reduce exposure, how to prepare for the next mass exploitation event, and why you should assume compromise the moment one of these vulnerabilities emerges.
Key Takeaways for Security Leaders
1. Inventory and monitor all internet-facing systems. Maintain a current, validated inventory of externally accessible applications and services—because you can’t secure what you don’t know is exposed.
2. Reduce unnecessary exposure at the network edge. Remove or restrict public access to administrative interfaces and systems that do not need to be internet-facing.
3. Build and rehearse a rapid-response playbook for mass-exploitation vulnerabilities. Define roles, timelines, and actions for the first 24–72 hours so your team can move immediately when the next major vulnerability drops.
4. Contact critical vendors and suppliers during major vulnerability events. Don’t wait—proactively verify whether your vendors are affected and whether your data may be at risk through third- or fourth-party exposure.
5. Assume vulnerable internet-facing systems may already be compromised. When mass exploitation begins, attackers are moving at internet speed—patching alone is not enough. Investigate, hunt for persistence, and validate that systems are clean.
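The inventory from takeaway 1 pays off when you cross-reference it against CISA's Known Exploited Vulnerabilities catalog, which is published as JSON with fields such as cveID and product. A minimal sketch, assuming an already-downloaded catalog snippet and an illustrative inventory format:

```python
def match_kev(kev_entries, inventory):
    """Return (hostname, cveID) pairs where an internet-facing asset runs
    software whose product name appears in the KEV catalog. Matching is a
    deliberately crude substring check — tune it for your naming scheme."""
    hits = []
    for entry in kev_entries:
        product = entry["product"].lower()
        for asset in inventory:
            if asset["internet_facing"] and product in asset["software"].lower():
                hits.append((asset["hostname"], entry["cveID"]))
    return hits
```

Run this every time CISA updates the catalog; every hit is a system to patch and, per takeaway 5, to hunt for persistence on.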
Resources
1. React2Shell vulnerability coverage (BleepingComputer) https://www.bleepingcomputer.com/news/security/react2shell-flaw-exploited-to-breach-30-orgs-77k-ip-addresses-vulnerable/
2. LexisNexis breach details (BleepingComputer) https://www.bleepingcomputer.com/news/security/lexisnexis-confirms-data-breach-as-hackers-leak-stolen-files/
3. Compromised web hosting panels in cybercrime markets (BleepingComputer) https://www.bleepingcomputer.com/news/security/compromised-site-management-panels-are-a-hot-item-in-cybercrime-markets/
4. CISA Known Exploited Vulnerabilities Catalog https://www.cisa.gov/known-exploited-vulnerabilities-catalog

Looking for more cybersecurity resources?
Check out our additional resources:
Blog: https://www.LMGsecurity.com/blog/
Top Controls Reports: https://www.LMGsecurity.com/top-security-controls-reports/
Videos: www.youtube.com/@LMGsecurity
