
Stay ahead of the latest cybersecurity trends with Cyberside Chats! Listen to our weekly podcast every Tuesday at 6:30 a.m. ET, and join us live once a month for breaking news, emerging threats, and actionable solutions. Whether you’re a cybersecurity professional or an executive looking to understand how to protect your organization, cybersecurity experts Sherri Davidoff and Matt Durrin will help you stay informed and proactively prepare for today’s top cybersecurity threats, AI-driven attack and defense strategies, and more!
Join us monthly for an interactive Cyberside Chats: Live! Our next session will be announced soon.
Episodes

52 minutes ago
Top Threat of 2026: The AI Visibility and Control Gap
AI is no longer a standalone tool—it is embedded directly into productivity platforms, collaboration systems, analytics workflows, and customer-facing applications. In this special Cyberside Chats episode, Sherri Davidoff and Matt Durrin break down why the lack of visibility and control over AI has emerged as the first and most pressing top threat of 2026.
Using real-world examples like the EchoLeak zero-click vulnerability in Microsoft 365 Copilot, the discussion highlights how AI can inherit broad, legitimate access to enterprise data while operating outside traditional security controls. These risks often generate no alerts, no indicators of compromise, and no obvious “incident” until sensitive data has already been exposed or misused.
Listeners will walk away with a practical framework for understanding where AI risk hides inside modern environments—and concrete steps security and IT teams can take to centralize AI usage, regain visibility, govern access, and apply long-standing security principles to this rapidly evolving attack surface.
Key Takeaways
1. Centralize AI usage across the organization. Require a clear, centralized process for approving AI tools and enabling new AI features, including those embedded in existing SaaS platforms.
2. Gain visibility into AI access and data flows. Inventory which AI tools, agents, and features are in use, which users interact with them, and what data sources they can access or influence.
3. Restrict and govern AI usage based on data sensitivity. Align AI permissions with data classification, restrict use for regulated or highly sensitive data sets, and integrate AI considerations into vendor risk management.
4. Apply the principle of least privilege to AI systems. Treat AI like any other privileged entity by limiting access to only what is necessary and reducing blast radius if credentials or models are misused.
5. Evaluate technical controls designed for AI security. Consider emerging solutions such as AI gateways that provide enforcement, logging, and observability for prompts, responses, and model access (see the sketch below).
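To make that last takeaway concrete, here is a minimal, illustrative Python sketch of the kind of logic an AI gateway applies: it centralizes model access, enforces an approved-model list, blocks prompts containing obviously sensitive patterns, and logs every request for observability. The model names, regex patterns, and log file are placeholder assumptions, not any specific product's behavior.

    # Minimal AI gateway sketch (illustrative only): centralize model access,
    # enforce an allow-list, block obvious sensitive data, and log every call.
    import json, logging, re, time

    logging.basicConfig(filename="ai_gateway.log", level=logging.INFO)

    APPROVED_MODELS = {"internal-gpt", "internal-claude"}    # assumption: your approved models
    SENSITIVE_PATTERNS = [r"\b\d{3}-\d{2}-\d{4}\b",          # SSN-like
                          r"\b\d{16}\b"]                     # card-number-like

    def handle_request(user, model, prompt):
        """Gate one prompt: check the model, scan the prompt, log the decision."""
        if model not in APPROVED_MODELS:
            logging.warning(json.dumps({"ts": time.time(), "user": user,
                                        "model": model, "action": "blocked_model"}))
            return {"allowed": False, "reason": "model not approved"}
        for pattern in SENSITIVE_PATTERNS:
            if re.search(pattern, prompt):
                logging.warning(json.dumps({"ts": time.time(), "user": user,
                                            "model": model, "action": "blocked_sensitive"}))
                return {"allowed": False, "reason": "sensitive data in prompt"}
        logging.info(json.dumps({"ts": time.time(), "user": user, "model": model,
                                 "action": "forwarded", "prompt_chars": len(prompt)}))
        return {"allowed": True}   # in a real gateway, forward the call to the model here

In practice, a commercial AI gateway layers authentication, response inspection, and policy management on top of this basic pattern.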
Resources
1. Microsoft Digital Defense Report 2025
2. NIST AI Risk Management Framework
https://www.nist.gov/itl/ai-risk-management-framework
3. Microsoft 365 Copilot Zero-Click AI Vulnerability (EchoLeak)
https://www.infosecurity-magazine.com/news/microsoft-365-copilot-zeroclick-ai/
4. Adapting to AI Risks: Essential Cybersecurity Program Updates
https://www.LMGsecurity.com/resources/adapting-to-ai-risks-essential-cybersecurity-program-updates/
5. Microsoft on Agentic AI and Embedded Automation (2026)

Tuesday Jan 27, 2026
The Verizon Outage and the Cost of Concentration
The recent Verizon outage underscores a growing risk in today’s technology landscape: when critical services are concentrated among a small number of providers, failures don’t stay isolated.
In this live discussion, we’ll connect the Verizon outage to past telecom and cloud disruptions to examine how infrastructure dependency creates cascading business impact. We’ll also explore how large-scale outages intersect with security threats targeting telecommunications, where availability, confidentiality, and integrity failures increasingly overlap.
The session will close with actionable takeaways for strengthening resilience and risk planning across cybersecurity and IT programs.
Key Takeaways
1. Diversify your technology infrastructure. Relying on a single carrier, cloud provider, or bundled service creates a single point of failure. Purposeful diversification across providers can reduce the impact of large-scale outages and improve overall resilience.
2. Treat outages as security incidents, not just reliability problems. Large-scale telecom and cloud outages directly disrupt authentication, monitoring, and incident response, and should trigger security workflows—not just IT troubleshooting.
3. Identify and document your dependencies on carriers and cloud providers. Many security controls rely on SMS, voice, cloud identity, or single regions; understanding these dependencies ahead of time prevents dangerous blind spots during outages (a simple dependency-inventory sketch follows this list).
4. Plan and test incident response without phones, SMS, or primary cloud access. Assume your normal communication and authentication methods will fail and ensure your teams know how to coordinate securely when core services are unavailable.
5. Expect outages to increase fraud and social engineering activity. Attackers exploit confusion and urgency during service disruptions, so security teams should prepare staff for impersonation and “service restoration” scams during major outages.
6. Use widespread outages as learning opportunities. Review what happened, assess how your organization was—or could have been—impacted, identify potential areas for improvement, and update incident response, communications, and resilience plans accordingly.
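As a starting point for the dependency-inventory takeaway above, this small Python sketch records which security controls rely on which providers and flags concentration risk. The entries and field names are hypothetical examples, not a prescribed schema.

    # Illustrative dependency inventory: map controls to providers, then flag
    # providers that underpin multiple controls (single points of failure).
    from collections import defaultdict

    DEPENDENCIES = [   # hypothetical example entries
        {"control": "MFA (SMS fallback)",  "provider": "Carrier A"},
        {"control": "Help desk phone",     "provider": "Carrier A"},
        {"control": "VPN gateway",         "provider": "Cloud region us-east-1"},
        {"control": "SIEM log ingestion",  "provider": "Cloud region us-east-1"},
    ]

    def concentration_risks(deps):
        """Group controls by provider; return providers carrying more than one control."""
        by_provider = defaultdict(list)
        for d in deps:
            by_provider[d["provider"]].append(d["control"])
        return {p: c for p, c in by_provider.items() if len(c) > 1}

    for provider, controls in concentration_risks(DEPENDENCIES).items():
        print(f"Concentration risk: {provider} underpins {controls}")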
Resources
1. Verizon official network outage update https://www.verizon.com/about/news/update-network-outage
2. Forrester: Verizon outage reignites reliability concerns https://www.forrester.com/blogs/verizon-outage-reignites-reliability-concerns/
3. CNN: Verizon outage disrupted phone and internet service nationwide https://www.cnn.com/2026/01/15/tech/verizon-outage-phone-internet-service
4. AP News: Verizon outage disrupted calling and data services nationwide https://apnews.com/article/85d658a4fb6a6175cae8981d91a809c9
5. CNN: AT&T outage shows how dependent daily life has become on mobile networks (2024) https://www.cnn.com/2024/02/23/tech/att-outage-customer-service

Tuesday Jan 20, 2026
The FTC has issued an order against General Motors for collecting and selling drivers’ precise location and behavior data, gathered every few seconds and marketed as a safety feature. That data was sold into insurance ecosystems and used to influence pricing and coverage decisions — a clear reminder that how organizations collect, retain, and share data now carries direct security, regulatory, and financial risk.
In this episode of Cyberside Chats, we explain why the GM case matters to CISOs, cybersecurity leaders, and IT teams everywhere. Data proliferation doesn’t just create privacy exposure; it creates systemic risk that fuels identity abuse, authentication bypass, fake job applications, and deepfake campaigns across organizations. The message is simple: data is hazardous material, and minimizing it is now a core part of cybersecurity strategy.
Key Takeaways:
1. Prioritize data inventory and mapping in 2026
You cannot assess risk, select controls, or meet regulatory obligations without knowing what data you have, where it lives, how it flows, and why it is retained (a minimal inventory sketch follows this list).
2. Reduce data to reduce risk
Data minimization is a security control that lowers breach impact, compliance burden, and long-term cost.
3. Expect regulators to scrutinize data use, not just breaches
Enforcement increasingly targets over-collection, secondary use, sharing, and retention even when no breach occurs.
4. Create and actively use a data classification policy
Classification drives retention, access controls, monitoring, and protection aligned to data value and regulatory exposure.
5. Design identity and recovery assuming personal data is already compromised
Build authentication and recovery flows that do not rely on the secrecy of SSNs, dates of birth, addresses, or other static personal data.
6. Train teams on data handling, not just security tools
Ensure engineers, IT staff, and business teams understand what data can be collected, how long it can be retained, where it may be stored, and how it can be shared.
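To illustrate the inventory and minimization takeaways above, here is a minimal Python sketch of a data inventory: one record per data store with classification, purpose, and retention, plus a check for data held past its documented retention period. Store names, classifications, and dates are hypothetical.

    # Illustrative data inventory with a simple retention check.
    from datetime import date, timedelta

    INVENTORY = [   # hypothetical entries
        {"store": "crm_prod",       "classification": "confidential", "purpose": "customer support",
         "retention_days": 730, "oldest_record": date(2023, 6, 1)},
        {"store": "telematics_raw", "classification": "restricted",   "purpose": "vehicle diagnostics",
         "retention_days": 90,  "oldest_record": date(2025, 8, 15)},
    ]

    def overdue_for_deletion(inventory, today=None):
        """Return stores holding data older than their documented retention period."""
        today = today or date.today()
        return [item["store"] for item in inventory
                if today - item["oldest_record"] > timedelta(days=item["retention_days"])]

    print(overdue_for_deletion(INVENTORY))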
Resources:
1. California Privacy Protection Agency — Delete Request and Opt-Out Platform (DROP)
2. FTC Press Release — FTC Takes Action Against General Motors for Sharing Drivers’ Precise Location and Driving Behavior Data
3. California Delete Act (SB 362) — Overview
https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB362
4. Texas Attorney General — Data Privacy Enforcement Actions
https://www.texasattorneygeneral.gov/news/releases
5. Data Breaches by Sherri Davidoff
https://www.amazon.com/Data-Breaches-Opportunity-Sherri-Davidoff/dp/0134506782

Tuesday Jan 13, 2026
Venezuela’s Blackout: Cybercrime Domino Effect
When Venezuela experienced widespread power and internet outages, the impact went far beyond inconvenience—it created a perfect environment for cyber exploitation.
In this episode of Cyberside Chats, we use Venezuela’s disruption as a case study to show how cyber risk escalates when power, connectivity, and trusted services break down. We examine why phishing, fraud, and impersonation reliably surge after crises, how narratives around cyber-enabled disruption can trigger copycat or opportunistic attacks, and why even well-run organizations resort to risky security shortcuts when normal systems fail.
We also explore how attackers weaponize emergency messaging, impersonate critical infrastructure and connectivity providers, and exploit verification failures when standard workflows are disrupted. The takeaway is simple: when infrastructure collapses, trust erodes—and cybercrime scales quickly to fill the gap.

Tuesday Jan 06, 2026
What the Epstein Files Teach Us About Redaction and AI
The December release of the Epstein files wasn’t just controversial—it exposed a set of security problems organizations face every day. Documents that appeared heavily redacted weren’t always properly sanitized. Some files were pulled and reissued, drawing even more attention. And as interest surged, attackers quickly stepped in, distributing malware and phishing sites disguised as “Epstein archives.”
In this episode of Cyberside Chats, we use the Epstein files as a real-world case study to explore two sides of the same problem: how organizations can be confident they’re not releasing more data than intended, and how they can trust—or verify—the information they consume under pressure. We dig into redaction failures, how AI tools change the risk model, how attackers weaponize breaking news, and practical ways teams can authenticate data before reacting.
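One practical check before releasing "redacted" documents: a black box drawn over text does not remove the underlying text layer. The Python sketch below, which assumes the pypdf library and uses a hypothetical file name and term list, extracts each page's text and reports terms that should have been removed but are still recoverable.

    # Redaction check sketch (assumes the pypdf library): search the extractable
    # text layer for terms that were supposed to be removed before release.
    from pypdf import PdfReader

    def find_unredacted_terms(pdf_path, terms):
        """Return (page_number, term) pairs where supposedly redacted text is still extractable."""
        reader = PdfReader(pdf_path)
        hits = []
        for page_num, page in enumerate(reader.pages, start=1):
            text = (page.extract_text() or "").lower()
            for term in terms:
                if term.lower() in text:
                    hits.append((page_num, term))
        return hits

    # Hypothetical usage:
    # print(find_unredacted_terms("release_candidate.pdf", ["Jane Doe", "555-0142"]))

This catches only text-layer failures; flattened images, document metadata, and revision history need separate checks.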

Tuesday Dec 30, 2025
Amazon's Warning: The New Reality of Initial Access
Amazon released two security disclosures in the same week — and together, they reveal how modern attackers are getting inside organizations without breaking in.
One case involved a North Korean IT worker who entered Amazon’s environment through a third-party contractor and was detected through subtle behavioral anomalies rather than malware. The other detailed a years-long Russian state-sponsored campaign that shifted away from exploits and instead abused misconfigured edge devices and trusted infrastructure to steal and replay credentials.
Together, these incidents show how nation-state attackers are increasingly blending into human and technical systems that organizations already trust — forcing defenders to rethink how initial access really happens going into 2026.
Key Takeaways
1. Treat hiring and contractors as part of your attack surface.
Nation-state actors are deliberately targeting IT and technical roles. Contractor onboarding, identity verification, and access scoping should be handled with the same rigor as privileged account provisioning.
2. Secure and monitor network edge devices as identity infrastructure
Misconfigured edge devices have become a primary initial access vector. Inventory them, assign ownership, restrict management access, and monitor them like authentication systems — not just networking gear.
3. Enforce strong MFA everywhere credentials matter
If credentials can be used without MFA, assume they will be abused. Require MFA on VPNs, edge device management interfaces, cloud consoles, SaaS admin portals, and internal administrative access.
4. Harden endpoints and validate how access actually occurs
Endpoint security still matters. Harden devices and look for signs of remote control, unusual latency, or access paths that don’t match how work is normally done.
5. Shift detection from “malicious” to “out of place”
The most effective attacks often look legitimate. Focus detection on behavioral mismatches — access that technically succeeds but doesn’t align with role, geography, timing, or expected workflow (see the detection sketch below).
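Building on that last takeaway, here is an illustrative Python sketch of "out of place" detection: a successful login is compared against a per-user baseline of expected countries and working hours, and mismatches are flagged even though nothing about the event is overtly malicious. The baseline and event fields are assumptions for illustration.

    # Flag successful logins that do not fit a user's expected baseline.
    EXPECTED = {   # hypothetical baseline built from HR data and historical activity
        "jsmith": {"countries": {"US"}, "hours": range(6, 20)},
    }

    def out_of_place(event):
        """Return reasons a successful login looks out of place for this user."""
        profile = EXPECTED.get(event["user"])
        if profile is None:
            return ["no baseline for user"]
        reasons = []
        if event["country"] not in profile["countries"]:
            reasons.append(f"unexpected country {event['country']}")
        if event["hour"] not in profile["hours"]:
            reasons.append(f"off-hours login at {event['hour']}:00")
        return reasons

    print(out_of_place({"user": "jsmith", "country": "RO", "hour": 3}))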
Resources:
1. Amazon Threat Intelligence Identifies Russian Cyber Threat Group Targeting Western Critical Infrastructure
2. Amazon Caught North Korean IT Worker by Tracing Keystroke Data
3. North Korean Infiltrator Caught Working in Amazon IT Department Thanks to Keystroke Lag
4. Confessions of a Laptop Farmer: How an American Helped North Korea’s Remote Worker Scheme
5. Hiring security checklist
https://www.lmgsecurity.com/resources/hiring-security-checklist/

Tuesday Dec 23, 2025
AI Broke Trust: Identity Has to Step Up in 2026
AI has supercharged phishing, deepfakes, and impersonation attacks—and 2025 proved that our trust systems aren’t built for this new reality. In this episode, Sherri and Matt break down the #1 change every security program needs in 2026: dramatically improving identity and authentication across the organization.
We explore how AI blurred the lines between legitimate and malicious communication, why authentication can no longer stop at the login screen, and where organizations must start adding verification into everyday workflows—from IT support calls to executive requests and financial approvals.
Plus, we discuss what “next-generation” user training looks like when employees can no longer rely on old phishing cues and must instead adopt identity-safety habits that AI can’t easily spoof.
If you want to strengthen your security program for the year ahead, this is the episode to watch.
Key Takeaways:
- Audit where internal conversations trigger action. Before adding controls, understand where trust actually matters—financial approvals, IT support, HR changes, executive requests—and treat those points as attack surfaces.
- Expand authentication into everyday workflows. Add verification to calls, video meetings, chats, approvals, and support interactions using known systems, codes, and out-of-band confirmation. Apply friction intentionally where mistakes are costly.
- Use verified communication features in collaboration platforms. Enable identity indicators, reporting features, and access restrictions in tools like Teams and Slack, and treat them as identity systems rather than just chat tools.
- Implement out-of-band push confirmation for high-risk requests. Authenticator-based confirmation defeats voice, video, and message impersonation because attackers rarely control multiple channels simultaneously (a simplified workflow sketch follows this list).
- Move toward continuous identity validation. Identity should be reassessed as behavior and risk change, with step-up verification and session revocation for high-risk actions.
- Redesign training around identity safety. Teach employees how to verify people and requests, not just emails, and reward them for slowing down and confirming—even when it frustrates leadership.
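As a rough illustration of the out-of-band confirmation bullet above, the Python sketch below holds a high-risk request until a one-time code, delivered to the claimed requester's pre-registered authenticator, is supplied back through that separate channel. The send_push function is a stub and the whole workflow is a simplified assumption, not any vendor's API.

    # Out-of-band confirmation sketch: a request "approved" on a call or in chat
    # only proceeds once a code delivered via a separate, registered channel matches.
    import secrets

    PENDING = {}   # request_id -> one-time code

    def require_confirmation(request_id, requester, send_push):
        """Generate a one-time code and deliver it to the requester's registered authenticator."""
        code = f"{secrets.randbelow(1_000_000):06d}"
        PENDING[request_id] = code
        send_push(requester, f"Confirm request {request_id} with code {code}")

    def confirm(request_id, submitted_code):
        """Proceed only if the code from the separate channel matches."""
        return secrets.compare_digest(PENDING.get(request_id, ""), submitted_code)

Because an attacker on the voice or video channel does not control the registered device, an impersonated approval stalls at this step.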
Tune in weekly on Tuesdays at 6:30 am ET for more cybersecurity advice, and visit www.LMGsecurity.com if you need help with cybersecurity testing, advisory services, or training.
Resources:
CFO.com – Deepfake CFO Scam Costs Engineering Firm $25 Million
https://www.cfo.com/news/deepfake-cfo-hong-kong-25-million-fraud-cyber-crime/
Retool – MFA Isn’t MFA
https://retool.com/blog/mfa-isnt-mfa
Sophos MDR tracks two ransomware campaigns using “email bombing,” Microsoft Teams “vishing”
https://news.sophos.com/en-us/2025/01/21/sophos-mdr-tracks-two-ransomware-campaigns-using-email-bombing-microsoft-teams-vishing/
Wired – Doxers Posing as Cops Are Tricking Big Tech Firms Into Sharing People’s Private Data
https://www.wired.com/story/doxers-posing-as-cops-are-tricking-big-tech-firms-into-sharing-peoples-private-data/
LMG Security – 5 New-ish Microsoft Security Features & What They Reveal About Today’s Threats
https://www.lmgsecurity.com/5-new-ish-microsoft-security-features-what-they-reveal-about-todays-threats/

Tuesday Dec 16, 2025
The 5 New-ish Microsoft Security Features to Roll Out in 2026
Microsoft is rolling out a series of new-ish security features across Microsoft 365 in 2026 — and these updates are no accident. They’re direct responses to how attackers are exploiting collaboration tools like Teams, Slack, Zoom, and Google Chat. In this episode, Sherri and Matt break down the five features that matter most, why they’re happening now, and how every organization can benefit from these lessons, even if you’re not a Microsoft shop.
We explore the rise of impersonation attacks inside collaboration platforms, the security implications of AI copilots like Microsoft Copilot and Gemini, and why identity boundaries and data governance are quickly becoming foundational to modern security programs. You’ll come away with a clear understanding of what these new-ish Microsoft features signal about the evolving threat landscape — and practical steps you can take today to strengthen your security posture.
Key Takeaways
- Treat collaboration platforms as high-risk communication channels. Attackers increasingly use Teams, Slack, Zoom, and similar tools to impersonate coworkers or support staff, and organizations should help employees verify unexpected contacts just as rigorously as they verify email.
- Make it easy for users to report suspicious activity. Whether or not your platform offers a built-in reporting feature like Microsoft’s suspicious-call button, employees need a simple, well-understood way to escalate strange messages or calls inside collaboration tools.
- Monitor external collaboration for anomalies. Microsoft’s new anomaly report highlights a growing need across all ecosystems to watch for unexpected domains, unusual activity patterns, and impersonation attempts that occur through external collaboration channels (a simple domain-check sketch follows this list).
- Classify and label sensitive data before enabling AI assistants. AI tools such as Copilot, Gemini, and Slack GPT inherit user permissions and may access far more information than intended if organizations haven’t established clear sensitivity labels and access boundaries.
- Enforce identity and tenant boundaries to limit data leakage. Features like Tenant Restrictions v2 demonstrate the importance of restricting where users can authenticate and ensuring that corporate data stays within approved environments.
- Update security training to reflect collaboration-era social engineering. Modern attacks frequently occur through chat messages, impersonated vendor accounts, malicious external domains, or voice/video calls, and training must evolve beyond traditional email-focused programs.
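To ground the external-collaboration takeaway above, here is a small Python sketch that scans exported chat audit events and flags external senders whose domains are not on an approved partner list. The event format, field names, and domains are assumptions rather than any particular platform's export schema.

    # Flag chat events from external domains that are not approved partners.
    APPROVED_EXTERNAL_DOMAINS = {"trustedpartner.com", "msp-example.net"}   # hypothetical

    def flag_unapproved_external_senders(events):
        """Return events whose external sender domain is not on the approved list."""
        flagged = []
        for event in events:
            domain = event["sender"].split("@")[-1].lower()
            if event.get("external") and domain not in APPROVED_EXTERNAL_DOMAINS:
                flagged.append(event)
        return flagged

    sample = [{"sender": "helpdesk@rnicrosoft-support.com", "external": True, "chat": "MFA reset request"}]
    print(flag_unapproved_external_senders(sample))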
Please follow our podcast for the latest cybersecurity advice, and visit us at www.LMGsecurity.com if you need help with technical testing, cybersecurity consulting, and training!
Resources Mentioned
- Microsoft 365: Advancing Microsoft 365 – New Capabilities and Pricing Update: https://www.microsoft.com/en-us/microsoft-365/blog/2025/12/04/advancing-microsoft-365-new-capabilities-and-pricing-update/
- Microsoft 365 Roadmap – Suspicious Call Reporting (ID 536573): https://www.microsoft.com/en-us/microsoft-365/roadmap?id=536573
- Check Point Research: Exploiting Trust in Microsoft Teams: https://blog.checkpoint.com/research/exploiting-trust-in-collaboration-microsoft-teams-vulnerabilities-uncovered/
- Phishing Susceptibility Study (arXiv): https://arxiv.org/abs/2510.27298
- LMG Security Video: Email Bombing & IT Helpdesk Spoofing Attacks—How to Stop Them: https://www.lmgsecurity.com/videos/email-bombing-it-helpdesk-spoofing-attacks-how-to-stop-them/

Looking for more cybersecurity resources?
Check out our additional resources:
Blog: https://www.LMGsecurity.com/blog/
Top Controls Reports: https://www.LMGsecurity.com/top-security-controls-reports/
Videos: www.youtube.com/@LMGsecurity
