Pentagon Labels AI Firm Anthropic a Supply-Chain Risk in Escalating Clash

The Pentagon has formally designated artificial intelligence company Anthropic a “supply-chain risk” to national security, effective immediately.

A senior defense official confirmed the move to Reuters on Thursday. The designation bars government contractors from using Anthropic’s technology, including its Claude AI model, in U.S. military work. The decision follows weeks of failed negotiations over safeguards on military AI applications.

Dispute Over AI Safeguards Intensifies

The core conflict stems from Anthropic’s strict policies prohibiting Claude’s use in autonomous weapons or mass surveillance.

Pentagon officials argued these restrictions limit necessary flexibility for lawful military operations. A source familiar with the matter revealed Claude has supported U.S. military efforts in Iran, including intelligence analysis and operational planning amid ongoing strikes.

Unprecedented Action and Anthropic’s Response

This marks the first time a U.S. company has received such a designation, which is typically reserved for foreign entities like China’s Huawei.

Anthropic CEO Dario Amodei described the label as having a “narrow scope,” applying only to direct use in Department of War contracts, not to broader commercial activity by defense-linked customers.

He vowed to challenge the designation in court and noted ongoing talks about potential military collaboration that would not require removing key safeguards. Microsoft, an investor, affirmed that Claude remains available to non-defense users via its platforms.

The move highlights growing tensions between AI safety priorities and national security demands under the Trump administration, which has renamed the Defense Department the “Department of War.”
