The Department of Defense designated Anthropic a supply chain risk after the company refused to remove two governance restrictions from its acceptable use policy: prohibitions on mass domestic surveillance and fully autonomous weapons systems.
Jul 2025: Anthropic and the Pentagon signed a contract making Claude the first frontier AI model on classified networks
Feb 16, 2026: Reports emerged that Hegseth was close to designating Anthropic a supply chain risk
Feb 24, 2026: Hegseth gave Anthropic a Friday deadline to open Claude for unrestricted military use or face Defense Production Act invocation
Feb 24, 2026: Same day — Anthropic published RSP 3.0, replacing hard safety commitments with nonbinding Frontier Safety Roadmaps
Feb 27, 2026: Hegseth designated Anthropic a supply chain risk under 10 USC 3252, the first use of the statute against an American company
Feb 27, 2026: Trump ordered all federal agencies to cease using Claude, with a 6-month DOD wind-down
Feb 27, 2026: Anthropic issued defiant public statement refusing to comply
Feb 28, 2026: Claude surged to number one on the Apple App Store; hundreds of Google and OpenAI employees signed a petition supporting Anthropic
Mar 9, 2026: Anthropic filed dual lawsuits in N.D. California and D.C. Circuit challenging the designation
Mar 10, 2026: Microsoft filed a corporate amicus brief supporting Anthropic; OpenAI and Google DeepMind researchers filed personal amicus briefs
Mar 26, 2026: Judge Rita Lin blocked the designation, ruling it violated First Amendment and due process rights
Acceptable use policies have real operational consequences — two specific provisions cost Anthropic over $200M in government contracts.
The supply chain risk statute (10 USC 3252) was used against a domestic company for the first time, creating new legal precedent for AI governance.
A federal judge ruled the designation violated First Amendment rights, establishing that maintaining AI safety guardrails is protected speech.
The dispute revealed that OpenAI accepted the same Pentagon contract despite claiming to maintain identical guardrails, raising questions about whether such policies are consistently enforced.
Enterprise customers face new compliance risk: defense contractors must now evaluate whether AI vendor relationships create supply chain exposure.
While the designation stood, defense contractors and federal agencies using Claude were under orders to find alternatives, and businesses with defense-adjacent work face the same exposure going forward.
This is the first time the U.S. government has used a supply chain risk statute designed for foreign adversaries against an American company. If you use Claude for work, especially in defense, government contracting, or enterprise settings, this designation may affect whether your organization can maintain its Anthropic relationship. The dispute also shows that the two provisions Anthropic refused to remove, the prohibitions on mass surveillance and autonomous weapons, are governance commitments the company is willing to lose hundreds of millions of dollars to preserve.
This is an editorial governance record documenting a significant policy event based on publicly reported information. It is not generated from an automated document diff. Analysis reflects reported actions and their governance implications.