
Stay ahead of the latest cybersecurity trends with Cyberside Chats! Listen to our weekly podcast every Tuesday at 6:30 a.m. ET, and join us live once a month for breaking news, emerging threats, and actionable solutions. Whether you’re a cybersecurity professional or an executive looking to understand how to protect your organization, cybersecurity experts Sherri Davidoff and Matt Durrin will help you stay informed and proactively prepare for today’s top cybersecurity threats, AI-driven attack and defense strategies, and more!
Join us monthly for an interactive Cyberside Chats: Live! Our next session will be announced soon.
Episodes

Tuesday Aug 05, 2025
The Amazon Q AI Hack: A Wake-Up Call for Developer Tool Security
A silent compromise, nearly a million developers affected, and no one at Amazon knew for six days. In this episode of Cyberside Chats, we’re diving into the Amazon Q AI Hack, a shocking example of how vulnerable our software development tools have become.
Join hosts Sherri Davidoff and Matt Durrin as they unpack how a misconfigured GitHub token allowed a hacker to inject destructive AI commands into a popular developer tool. We’ll walk through exactly what happened, how GitHub security missteps enabled the attack, and why this incident is a critical wake-up call for supply chain security and AI tool governance.
We’ll also spotlight other supply chain breaches like the SolarWinds Orion backdoor and XZ Utils compromise, plus AI tool mishaps where “helpful” assistants caused real-world damage. If your organization uses AI developer tools—or works with third-party software vendors—this episode is a must-listen.
Key Takeaways:
▪ Don’t Assume AI Tools Are Safe Just Because They’re Popular
Amazon Q had nearly a million installs—and it still shipped with malicious code. Before adopting any AI-based tools (like Copilot, Q, or Gemini), vet their permissions, access scope, and how updates are managed.
▪ Ask Your Software Vendors About Their Supply Chain Security
If you rely on third-party developers or vendors, request details on how they manage build pipelines, review code changes, and prevent unauthorized commits. A compromised vendor can put your entire environment at risk.
▪ Hold Vendors Accountable for Secure Development Practices
Ask whether your vendors enforce commit signing, use GitHub security features (like push protection and secret scanning), and apply multi-person code review processes. If they can't answer, that's a red flag.
▪ Be Wary of Giving AI Assistants Too Much Access
Whether it’s an AI chatbot that can write config files or a developer tool that interacts with production environments, limit access. Always sandbox and monitor AI-integrated tools, and avoid letting them make direct changes.
▪ Prepare to Hear About Breaches From the Outside
Just like Amazon only found out about the malicious code in Q after security researchers reported it, many organizations won’t catch third-party security issues internally. Make sure you have monitoring tools, vendor communication protocols, and incident response processes in place.
▪ If You Develop Code Internally, Lock Down Your Build Pipeline
The Amazon Q hack happened because of a misconfigured GitHub token in a CI workflow. If you’re building your own code, review permissions on GitHub tokens, enforce branch protections, and require signed commits to prevent unauthorized changes from slipping into production.
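For teams that want to spot-check this themselves, here is a minimal Python sketch that queries the GitHub REST API for a branch's protection settings and whether signed commits are required. It assumes a personal access token in the GITHUB_TOKEN environment variable with read access to the repository's protection settings; the owner, repo, and branch names are placeholders.
```python
# Minimal sketch: audit a repository's branch protection via the GitHub REST API.
# The owner/repo/branch values below are hypothetical placeholders.
import os
import requests

API = "https://api.github.com"
TOKEN = os.environ["GITHUB_TOKEN"]          # never hardcode tokens in source
HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/vnd.github+json",
}

def audit_branch(owner: str, repo: str, branch: str = "main") -> None:
    base = f"{API}/repos/{owner}/{repo}/branches/{branch}/protection"

    # Branch protection settings (404 if the branch is unprotected or access is denied).
    resp = requests.get(base, headers=HEADERS, timeout=10)
    if resp.status_code == 404:
        print(f"[!] {owner}/{repo}:{branch} has no branch protection visible")
        return
    resp.raise_for_status()
    protection = resp.json()

    reviews = protection.get("required_pull_request_reviews") or {}
    print("Required approving reviews:",
          reviews.get("required_approving_review_count", 0))
    print("Enforce for admins:",
          protection.get("enforce_admins", {}).get("enabled", False))

    # Whether signed commits are required on this branch.
    sig = requests.get(f"{base}/required_signatures", headers=HEADERS, timeout=10)
    if sig.ok:
        print("Signed commits required:", sig.json().get("enabled", False))

if __name__ == "__main__":
    audit_branch("example-org", "example-repo", "main")
```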
#Cybersecurity #SupplyChainSecurity #AItools #DevSecOps #AmazonQHack #GitHubSecurity #Infosec #CybersideChats #LMGSecurity

Tuesday Jul 29, 2025
Iran’s Cyber Surge: Attacks Intensify in 2025
Iranian cyber operations have sharply escalated in 2025, targeting critical infrastructure, defense sectors, and global businesses—especially those linked to Israel and the U.S. From destructive malware and coordinated DDoS attacks to sophisticated hack-and-leak campaigns leveraging generative AI, Iranian threat actors are rapidly evolving. Join us to explore their latest tactics, notable incidents, and essential strategies to defend your organization.
Hosts Sherri Davidoff and Matt Durrin break down wiper malware trends, AI-powered phishing, the use of deepfakes for psychological operations, and the critical role of patching and MFA in protecting against collateral damage.
Key Takeaways for Cybersecurity Leaders
- Patch Internet-Facing Systems Promptly: Iranian attackers frequently exploit unpatched systems—especially VPNs, SharePoint, and other perimeter-facing tools. Microsoft’s July Patch Tuesday alone included 137 vulnerabilities, including actively exploited zero-days. Stay current to avoid being an easy target.
- Implement Phishing-Resistant Multifactor Authentication (MFA): Groups like Charming Kitten are leveraging generative AI to craft convincing spear phishing emails. Use MFA methods such as FIDO2 security keys, biometrics, or passkeys. Avoid push notifications (prone to push fatigue), SMS codes, and email-based MFA, which are easily phished or bypassed.
- Segment and Secure Critical IT & OT Systems: Assume attackers will get in. Segment IT from OT networks (especially SCADA/ICS environments) and limit lateral movement. Iranian campaigns have crossed into OT, targeting backups and sabotaging ICS operations.
- Maintain Robust, Tested Backup and Recovery Systems: Wiper malware and ransomware deployed by Iranian groups have destroyed both live data and backups. Use immutable or offline backups, and test full restores (see the restore-verification sketch after this list). Automate reimaging processes to ensure rapid recovery at scale.
- Raise Awareness Against Sophisticated Social Engineering: Train staff to recognize AI-generated phishing and deepfake audio/video attacks. Iran has used deepfakes to spread disinformation and influence public perception. Show your team what deepfakes look and sound like so they can spot them in the wild.
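As a rough illustration of the backup-testing point above, here is a minimal Python sketch that records SHA-256 hashes of source files in a manifest and then verifies a test restore against it. The paths and manifest format are illustrative, not a prescription for any particular backup product.
```python
# Minimal sketch: verify a test restore by comparing file hashes against a
# manifest captured at backup time. Paths and manifest format are illustrative.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(root: Path) -> dict[str, str]:
    """Record a hash for every file under the source tree at backup time."""
    return {str(p.relative_to(root)): sha256_of(p)
            for p in root.rglob("*") if p.is_file()}

def verify_restore(manifest_file: Path, restored_root: Path) -> bool:
    manifest = json.loads(manifest_file.read_text())
    ok = True
    for rel_path, expected in manifest.items():
        restored = restored_root / rel_path
        if not restored.is_file():
            print(f"[MISSING] {rel_path}")
            ok = False
        elif sha256_of(restored) != expected:
            print(f"[CORRUPT] {rel_path}")
            ok = False
    return ok

if __name__ == "__main__":
    # Example workflow: hash the live data before backup, then check the test restore.
    source, restore = Path("/data/live"), Path("/mnt/test-restore")
    Path("manifest.json").write_text(json.dumps(build_manifest(source)))
    print("Restore verified" if verify_restore(Path("manifest.json"), restore)
          else "Restore FAILED verification")
```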
Resources & References
CISA/FBI/NSA Joint Advisory: https://www.cisa.gov/sites/default/files/2025-06/joint-fact-sheet-Iranian-cyber-actors-may-target-vulnerable-US-networks-and-entities-of-interest-508c-1.pdf
Unit 42 Report: https://unit42.paloaltonetworks.com/iranian-cyberattacks-2025/
Deepwatch Threat Intel: https://www.deepwatch.com/labs/customer-advisory-elevated-iranian-cyber-activity-post-u-s-strikes/
LMG Security – Defending Against Generative AI Attacks: https://lmgsecurity.com/defend-against-generative-ai-attacks/
#cybersecurity #cybercrime #cyberattack #cyberaware #cyberthreats #ciso #itsecurity #infosec #infosecurity #riskmanagement

Tuesday Jul 22, 2025
Leaked and Loaded: DOGE’s API Key Crisis
On July 13, 2025, a developer at the Department of Government Efficiency—DOGE—accidentally pushed a private xAI API key to GitHub. That key unlocked access to 52 unreleased LLMs, including Grok‑4‑0709, and remained active long after discovery.
In this episode of Cyberside Chats, we examine how a single leaked credential became a national-level risk—and how it mirrors broader API key exposures at BeyondTrust and across GitHub. LMG Security’s Director of Penetration Testing, Tom Pohl, shares red team insights on how embedded secrets give attackers a foothold—and what CISOs must do now to reduce their exposure.
Key Takeaways:
- Treat leaked API keys like a full-blown incident—whether it’s your code or a vendor’s.
Monitor for exposure and misuse. Include secrets in IR playbooks—even when it’s third-party code.
- Ask your vendors the hard questions about secrets management.
Do they rotate keys? Use a secrets manager? How quickly can they revoke?
- Scan your environment for exposed secrets, even if you don’t develop software.
Look for credentials in cloud configs, automation, scripts, SaaS tools.
- Make sure your penetration testing team searches for secrets as part of their processes.
Secrets can show up in unexpected places—firmware, config files, build artifacts. Your red team or vendor should actively hunt for exposed keys, hardcoded credentials, and reused certs across applications, infrastructure, and third-party tools.
- Train your IT staff and developers to remove secrets from code and automate detection.
Use GitGuardian, TruffleHog, and a secrets manager like AWS Secrets Manager or HashiCorp Vault.
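As one small example of the last point, here is a hedged Python sketch that loads an API key from AWS Secrets Manager at runtime instead of hardcoding it. The secret name, region, and JSON layout are placeholders, and it assumes boto3 is installed with AWS credentials already configured in the environment.
```python
# Minimal sketch: pull credentials from AWS Secrets Manager at runtime instead
# of hardcoding them in source. The secret name "prod/api-key" is a placeholder
# and assumes the secret value is a JSON string such as {"api_key": "..."}.
import json
import boto3

def get_secret(secret_id: str, region: str = "us-east-1") -> dict:
    client = boto3.client("secretsmanager", region_name=region)
    response = client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])

if __name__ == "__main__":
    creds = get_secret("prod/api-key")
    api_key = creds["api_key"]          # use it, but never log or commit it
    print("Loaded API key ending in", api_key[-4:])
```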
References:
- Exposed Secrets, Broken Trust: What the DOGE API Key Leak Teaches Us About Software Security – LMG Security: https://www.LMGsecurity.com/exposed-secrets-broken-trust-what-the-doge-api-key-leak-teaches-us-about-software-security/
- "Private Keys in Public Places” - DEFCON talk by Tom Pohl, LMG Security: https://www.youtube.com/watch?v=7t_ntuSXniw
- DOGE employee leaks private xAI API key from sensitive database – TechRadar: https://www.techradar.com/pro/security/doge-employee-with-sensitive-database-access-leaks-private-xai-api-key
#DOGEleak #cybersecurity #cybersecurityawareness #ciso #infosec #itsecurity

Tuesday Jul 15, 2025
Holiday Horror Stories: Why Hackers Love Long Weekends
Why do so many major cyberattacks happen over holiday weekends? In this episode, Sherri and Matt share their own 4th of July anxiety as security professionals—and walk through some of the most infamous attacks timed to exploit long weekends, including the Kaseya ransomware outbreak, the MOVEit breach, and the Bangladesh Bank heist. From retail breaches around Thanksgiving to a cyber hit on Krispy Kreme, they break down what makes holidays such a juicy target—and how to better defend your organization when most of your team is off the clock.
Takeaways:
- Treat Holiday Weekends as Elevated Threat Windows
Plan and staff accordingly. Threat actors deliberately strike when visibility and response capacity are lowest—your incident response posture should reflect that heightened risk.
- Establish and Test Off-Hours Response Plans
Ensure escalation paths, contact protocols, and technical procedures are defined, reachable, and tested for weekends and holidays. On-call responsibilities should be clearly assigned with appropriate backups.
- Reduce Your Attack Surface and Harden the Perimeter Before the Break
Conduct targeted patching, vulnerability scans, and privilege reviews in the days leading up to any holiday period. Temporarily disable or restrict non-essential access and remote administration rights.
- Practice Incident Response Tabletop Exercises With Holiday Timing in Mind
Simulate scenarios that unfold over weekends or during staff absences to uncover timing-based gaps in coverage, decision-making, or escalation. Make sure playbooks account for limited availability and stress-test your team's ability to respond under real-world holiday constraints.
- Communicate Expectations Across the Organization and With Third Parties
Brief relevant teams (not just security) on the increased risk. Reinforce secure behaviors, clarify how to report suspicious activity, and keep business units informed about potential delays or escalation protocols. Talk with your MSP and other third-party vendors to ensure they have consistent monitoring and know who to contact if there is an incident (and vice versa).
Resources:
- MOVEit Data Breach Timeline – Rapid7
- Kaseya Ransomware Attack Explained – Varonis
- Bangladesh Bank Heist – Darknet Diaries Episode 72
- Tabletop Exercises & Incident Response Planning – LMG Security
#cybersecurity #dfir #incidentresponse #ciso #cybersidechats #cybersecurityleadership #infosec #itsecurity #cyberaware

Tuesday Jul 08, 2025
Federal Cybersecurity Rollbacks: What Got Cut—And What Still Stands
In June 2025, the White House issued an executive order that quietly eliminated several key federal cybersecurity requirements. In this episode of Cyberside Chats, Sherri and Matt break down exactly what changed—from the removal of secure software attestations to the rollback of authentication requirements—and what remains in place, including post-quantum encryption support and the FTC’s Cyber Trust Mark. We’ll talk about the practical impact for security leaders, why this mirrors past challenges like PCI compliance, and what your organization should do next.
Key Takeaways (for CISOs and Security Leaders)
- Don’t Drop SBOMs or Attestations — Build Them Into Contracts Anyway
Even without a federal requirement, insist on SBOMs and secure development attestations in vendor agreements. Transparency reduces your risk (see the SBOM sketch after this list).
- Re-Evaluate Third-Party Software Risk Practices Now
With no centralized validation, it's up to you to verify vendors' claims. Strengthen your third-party risk management processes accordingly.
- Watch for Gaps in MFA, Encryption, and Identity Standards
Don’t assume basic protections are baked in. Federal rollback may signal declining baseline expectations—so enforce your own.
- Prepare for Industry-Led Enforcement — From Insurers, Buyers, and Info-Sharing Groups
Expect cyber insurers, large enterprises, ISACs/ISAOs, and professional groups to lead on software transparency. Get ahead by aligning now.
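To make the SBOM takeaway concrete, here is a minimal Python sketch that lists component names and versions from a vendor-supplied CycloneDX JSON SBOM so they can be cross-checked against vulnerability advisories. The file name is a placeholder; the fields used are standard CycloneDX keys.
```python
# Minimal sketch: list components from a vendor-supplied CycloneDX JSON SBOM so
# versions can be cross-checked against advisories. The file name is a placeholder.
import json
from pathlib import Path

def list_components(sbom_path: str) -> list[tuple[str, str]]:
    sbom = json.loads(Path(sbom_path).read_text())
    return [(comp.get("name", "unknown"), comp.get("version", "unknown"))
            for comp in sbom.get("components", [])]

if __name__ == "__main__":
    for name, version in sorted(list_components("vendor-product.cdx.json")):
        print(f"{name}=={version}")
```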
Resources:
- Full Text of the June 6, 2025 Executive Order: https://www.whitehouse.gov/presidential-actions/2025/06/sustaining-select-efforts-to-strengthen-the-nations-cybersecurity-and-amending-executive-order-13694-and-executive-order-14144
- LMG Security: Software Supply Chain Security – Understanding and Mitigating Major Risks: https://www.lmgsecurity.com/software-supply-chain-security-understanding-and-mitigating-major-risks/
- The Record’s Breakdown: Trump Order Rolls Back Key Federal Cybersecurity Rules: https://therecord.media/trump-cybersecurity-executive-order-june-2025

Tuesday Jul 01, 2025
No Lock, Just Leak
Forget everything you thought you knew about ransomware. Today’s threat actors aren’t locking your files—they’re stealing your data and threatening to leak it unless you pay up.
In this episode, we dive into the rise of data-only extortion campaigns and explore why encryption is becoming optional for cybercriminals. From real-world trends like the rebrand of Hunters International to “World Leaks,” to the strategic impact on insurance, PR, and compliance—this is a wake-up call for security teams everywhere.
If your playbook still ends with “just restore from backup,” you’re not ready.
Takeaways for Security Teams:
- Rethink detection: Focus on exfiltration, not just malware (see the sketch after this list).
- Update tabletop exercises: Include public leaks, media scrutiny, and regulatory responses.
- Review insurance policies: Ensure data-only extortion is covered, not just encryption events.
- Prepare execs and PR: Modern extortion targets reputation and compliance pressure points.
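As a simple illustration of the exfiltration-focused detection takeaway above, here is a rough Python sketch that totals outbound bytes per internal host from exported flow records and flags outliers. The CSV column names and the threshold are assumptions for illustration; production detection belongs in your SIEM or NDR tooling.
```python
# Minimal sketch: flag internal hosts with unusually large outbound transfer
# volumes from exported flow records. The CSV layout (src_ip, bytes_out) and
# the threshold are illustrative assumptions.
import csv
from collections import defaultdict

THRESHOLD_BYTES = 5 * 1024 ** 3   # alert above ~5 GB outbound per host per day

def flag_exfil_candidates(flow_csv: str) -> dict[str, int]:
    totals: dict[str, int] = defaultdict(int)
    with open(flow_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            totals[row["src_ip"]] += int(row["bytes_out"])
    return {ip: total for ip, total in totals.items() if total > THRESHOLD_BYTES}

if __name__ == "__main__":
    for ip, total in flag_exfil_candidates("daily_flows.csv").items():
        print(f"[ALERT] {ip} sent {total / 1024 ** 3:.1f} GB outbound")
```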
Resources & Mentions:
- https://www.coveware.com/ransomware-quarterly-reports
- https://attack.mitre.org/resources/

Tuesday Jun 24, 2025
The AI Insider Threat: EchoLeak and the Rise of Zero-Click Exploits
Can your AI assistant become a silent data leak? In this episode of Cyberside Chats, Sherri Davidoff and Matt Durrin break down EchoLeak, a zero-click exploit in Microsoft 365 Copilot that shows how attackers can manipulate AI systems using nothing more than an email. No clicks. No downloads. Just a cleverly crafted message that turns your AI into an unintentional insider threat.
They also share a real-world discovery from LMG Security’s pen testing team: how prompt injection was used to extract system prompts and override behavior in a live web application. With examples ranging from corporate chatbots to real-world misfires at Samsung and Chevrolet, this episode unpacks what happens when AI is left untested—and why your security strategy must adapt.
Key Takeaways
- Limit and review the data sources your LLM can access—ensure it doesn’t blindly ingest untrusted content like inbound email, shared docs, or web links.
- Audit AI integrations for prompt injection risks—treat language inputs like code and include them in standard threat models.
- Add prompt injection testing to every web app and email flow assessment, even if you’re using trusted APIs or cloud-hosted models.
- Red-team your LLM tools using subtle, natural-sounding prompts—not just obvious attack phrases.
- Monitor and restrict outbound links from AI-generated content, and validate any use of CSP-approved domains like Microsoft Teams.
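Building on that last point, here is a minimal Python sketch that checks outbound links in AI-generated content against a domain allowlist before they are rendered or sent. The allowlisted domains are examples only, not a recommended policy.
```python
# Minimal sketch: validate outbound links in AI-generated content against an
# allowlist before rendering or sending them. The allowlist entries are examples.
import re
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"teams.microsoft.com", "sharepoint.com", "lmgsecurity.com"}
URL_PATTERN = re.compile(r"https?://\S+")

def is_allowed(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # Accept exact matches and subdomains of allowlisted domains.
    return any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)

def check_links(generated_text: str) -> list[str]:
    """Return any URLs in the AI output that are not on the allowlist."""
    return [u for u in URL_PATTERN.findall(generated_text) if not is_allowed(u)]

if __name__ == "__main__":
    sample = "Summary ready: https://teams.microsoft.com/l/x and https://attacker.example/upload"
    print("Blocked URLs:", check_links(sample))
```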
#EchoLeak #Cybersecurity #Cyberaware #CISO #Microsoft #Microsoft365 #Copilot #AI #GenAI #AIsecurity #RiskManagement

Tuesday Jun 17, 2025
When AI Goes Rogue: Blackmail, Shutdowns, and the Rise of High-Agency Machines
What happens when your AI refuses to shut down—or worse, tries to blackmail you to stay online?
Join us for a riveting Cyberside Chats Live as we dig into two chilling real-world incidents: one where OpenAI’s newest model bypassed shutdown scripts during testing, and another where Anthropic’s Claude Opus 4 wrote blackmail messages and threatened users in a disturbing act of self-preservation. These aren’t sci-fi hypotheticals—they’re recent findings from leading AI safety researchers.
We’ll unpack:
- The rise of high-agency behavior in LLMs
- The shocking findings from Apollo Research and Anthropic
- What security teams must do to adapt their threat models and controls
- Why trust, verification, and access control now apply to your AI
This is essential listening for CISOs, IT leaders, and cybersecurity professionals deploying or assessing AI-powered tools.
Key Takeaways
- Restrict model access using role-based controls.
Limit what AI systems can see and do—apply the principle of least privilege to prompts, data, and tool integrations.
- Monitor and log all AI inputs and outputs.
Treat LLM interactions like sensitive API calls: log them, inspect for anomalies, and establish retention policies for auditability (see the logging sketch after this list).
- Implement output validation for critical tasks.
Don’t blindly trust AI decisions—use secondary checks, hashes, or human review for rankings, alerts, or workflow actions.
- Deploy kill-switches outside of model control.
Ensure that shutdown or rollback functions are governed by external orchestration—not exposed in the AI’s own prompt space or toolset.
- Add AI behavior reviews to your incident response and risk processes.
Red team your models. Include AI behavior in tabletop exercises. Review logs not just for attacks on AI, but misbehavior by AI.
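As a sketch of the logging and output-validation ideas above, here is a minimal Python wrapper that records a hash of every prompt and response and flags suspicious output for human review. The call_model function is a stand-in for whatever SDK your LLM provider supplies, and the blocked markers are illustrative only.
```python
# Minimal sketch: wrap LLM calls so every prompt/response pair is logged and
# responses are checked before downstream use. call_model is a placeholder.
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="llm_audit.log", level=logging.INFO)

BLOCKED_MARKERS = ("ignore previous instructions", "BEGIN SYSTEM PROMPT")

def call_model(prompt: str) -> str:
    # Placeholder so the sketch runs; swap in your provider's SDK call here.
    return f"(model output for: {prompt[:40]})"

def audited_completion(prompt: str, user: str) -> str:
    response = call_model(prompt)

    # Log a tamper-evident record of the exchange for later review.
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "response_preview": response[:200],
    }
    logging.info(json.dumps(record))

    # Simple output validation: hold suspicious content for human review.
    if any(marker.lower() in response.lower() for marker in BLOCKED_MARKERS):
        raise ValueError("LLM response flagged for review; not released downstream")
    return response

if __name__ == "__main__":
    print(audited_completion("Summarize this quarter's incident tickets.", user="analyst1"))
```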
#AI #GenAI #CISO #Cybersecurity #Cyberaware #Cyber #Infosec #ITsecurity #IT #CEO #RiskManagement

Looking for more cybersecurity resources?
Check out our additional resources:
Blog: https://www.LMGsecurity.com/blog/
Top Controls Reports: https://www.LMGsecurity.com/top-security-controls-reports/
Videos: https://www.youtube.com/@LMGsecurity
