
3.2K Downloads · 62 Episodes
Stay ahead of the latest cybersecurity trends with Cyberside Chats! Listen to our weekly podcast every Tuesday at 6:30 a.m. ET, and join us live once a month for breaking news, emerging threats, and actionable solutions. Whether you’re a cybersecurity professional or an executive looking to understand how to protect your organization, cybersecurity experts Sherri Davidoff and Matt Durrin will help you stay informed and proactively prepare for today’s top cybersecurity threats, AI-driven attack and defense strategies, and more!
Join us monthly for an interactive Cyberside Chats: Live! Our next session will be announced soon.
Episodes

10 hours ago
Is Anthropic a Pentagon “Supply Chain Risk”?
Anthropic has been labeled a “Supply-Chain Risk to National Security” after refusing two uses of its models: mass surveillance of Americans and lethal autonomous warfare without human oversight. But is Anthropic really a supply-chain risk, and how does this designation affect businesses that use Claude? In this episode, Sherri Davidoff and Matt Durrin unpack the timeline behind the Pentagon’s designation, what Anthropic claims is actually driving the conflict, and what’s known (and not known) about any underlying technical risk. They compare the situation to Kaspersky—where the supply-chain concern centered on privileged security software, foreign-state leverage, and update-channel risk—then bring it back to the enterprise questions that matter: vendor dependency, continuity planning, and what changes when an AI provider becomes politically or contractually constrained.
Key Takeaways for Security Leaders
1. Treat AI vendors as critical dependencies, not just tools.
If a frontier AI provider is embedded in coding, search, documentation, analytics, or agentic workflows, a legal or procurement shock can become an operational disruption. Track where you are dependent on a single model provider and where that dependency would hurt most.
2. For your highest-value uses, define fallback workflows ahead of time.
You may not be able to replace every provider quickly, but you should know what happens if a key AI service becomes unavailable, restricted, or no longer acceptable for regulatory or contractual reasons. For the workflows that matter most, decide in advance how the work gets done without that vendor.
3. Keep guardrails in place when AI is involved in critical changes.
AI can speed up engineering, operations, and decision-making, but that speed can create new failure modes if approvals, testing, rollback, and human review get weakened. Be especially careful in environments where AI-assisted or agentic systems can make infrastructure, code, security, or configuration changes.
4. Inventory where AI has real privilege.
The risk is much higher when AI can execute code, access sensitive data, approve actions, or trigger automations. Focus your review on those integrations first, because those are the places where vendor problems or internal AI mistakes are most likely to turn into real incidents.
5. Make your teams define the actual vendor risk they are worried about.
A vendor can create very different kinds of risk: technical compromise risk, foreign-control risk, continuity risk, or procurement/governance risk. Forcing that distinction helps teams respond more clearly and avoid treating every controversy like a hidden software compromise.
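The takeaways above amount to a lightweight AI-vendor risk register: inventory each integration, flag where the AI has real privilege, note whether a fallback workflow exists, and name the specific risk type rather than a vague worry. A minimal sketch of that register might look like the following (all field names, vendor labels, and the `review_priorities` helper are illustrative assumptions, not anything prescribed in the episode):

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskType(Enum):
    """The four distinct vendor-risk categories from takeaway 5."""
    TECHNICAL_COMPROMISE = "technical compromise"
    FOREIGN_CONTROL = "foreign control"
    CONTINUITY = "continuity"
    PROCUREMENT_GOVERNANCE = "procurement/governance"

@dataclass
class AIIntegration:
    name: str
    provider: str
    privileged: bool        # can it execute code, touch sensitive data, or trigger automations?
    fallback_defined: bool  # is there a documented workflow without this vendor?
    risk_types: set = field(default_factory=set)

def review_priorities(integrations):
    """Review privileged integrations with no fallback first (takeaways 2 and 4)."""
    return sorted(
        (i for i in integrations if i.privileged and not i.fallback_defined),
        key=lambda i: i.name,
    )

# Hypothetical inventory entries for illustration only:
inventory = [
    AIIntegration("code-assistant", "VendorA", privileged=True, fallback_defined=False,
                  risk_types={RiskType.CONTINUITY}),
    AIIntegration("doc-search", "VendorA", privileged=False, fallback_defined=True),
    AIIntegration("ops-agent", "VendorB", privileged=True, fallback_defined=True,
                  risk_types={RiskType.PROCUREMENT_GOVERNANCE}),
]

for item in review_priorities(inventory):
    print(item.name, sorted(r.value for r in item.risk_types))
```

Even a table this simple forces the distinction the episode argues for: a continuity concern about a politically constrained provider calls for a different response than a suspected technical compromise.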
Resources
1. Statement from Dario Amodei on our discussions with the Department of War (Anthropic, Feb. 26, 2026) https://www.anthropic.com/news/statement-department-of-war
2. Where things stand with the Department of War (Anthropic, Mar. 5, 2026) https://www.anthropic.com/news/where-stand-department-war
3. Anthropic v. U.S. Department of War et al. — Complaint for Declaratory and Injunctive Relief (N.D. Cal., filed Mar. 9, 2026) (court filing PDF) https://cand.uscourts.gov/cases-e-filing/cases/326-cv-01996/anthropic-pbc-v-us-department-war-et-al
4. BOD 17-01: Removal of Kaspersky-branded Products (CISA/DHS, Sept. 13, 2017) https://www.dhs.gov/archive/news/2017/09/13/dhs-statement-issuance-binding-operational-directive-17-01
5. Amazon holds engineering meeting following AI-related outages (Financial Times, Mar. 2026) https://www.ft.com/content/7cab4ec7-4712-4137-b602-119a44f771de
