CA-C-001866 Top 5%
Anthropic — Anthropic API Usage Policy
Entity: Anthropic
Date detected: February 27, 2026
Severity: HIGH
Direction: Negative
Category: Modified

Overview

The Department of Defense designated Anthropic a supply chain risk after the company refused to remove two governance restrictions from its acceptable use policy: prohibitions on mass domestic surveillance and fully autonomous weapons systems.


Timeline


Jul 2025: Anthropic and Pentagon signed contract making Claude the first frontier AI model on classified networks

Feb 16, 2026: Reports emerged that Hegseth was close to designating Anthropic a supply chain risk

Feb 24, 2026: Hegseth gave Anthropic a Friday deadline to open Claude for unrestricted military use or face Defense Production Act invocation

Feb 24, 2026: Same day — Anthropic published RSP 3.0, replacing hard safety commitments with nonbinding Frontier Safety Roadmaps

Feb 27, 2026: Hegseth designated Anthropic a supply chain risk under 10 USC 3252, the first use of the statute against an American company

Feb 27, 2026: Trump ordered all federal agencies to cease using Claude with 6-month DOD wind-down

Feb 27, 2026: Anthropic issued defiant public statement refusing to comply

Feb 28, 2026: Claude surged to number one on Apple App Store. Hundreds of Google and OpenAI employees signed petition supporting Anthropic

Mar 9, 2026: Anthropic filed dual lawsuits in N.D. California and D.C. Circuit challenging the designation

Mar 10, 2026: Microsoft filed corporate amicus brief supporting Anthropic. OpenAI and Google DeepMind researchers filed personal amicus briefs

Mar 26, 2026: Judge Rita Lin blocked the designation, ruling it violated First Amendment and due process rights


Key Findings

1. Acceptable use policies have real operational consequences — two specific provisions cost Anthropic over $200M in government contracts.

2. The supply chain risk statute (10 USC 3252) was used against a domestic company for the first time, creating new legal precedent for AI governance.

3. A federal judge ruled the designation violated First Amendment rights, establishing that maintaining AI safety guardrails is protected speech.

4. The dispute revealed that OpenAI accepted the same Pentagon contract while claiming identical guardrails — raising questions about consistent enforcement.

5. Enterprise customers face new compliance risk: defense contractors must now evaluate whether AI vendor relationships create supply chain exposure.

Consumer Impact

Defense contractors and federal agencies using Claude must find alternatives. Enterprise customers with defense-adjacent business face compliance risk.

Governance Analysis

This is the first time the U.S. government has used a supply chain risk statute designed for foreign adversaries against an American company. If you use Claude for work — especially in defense, government contracting, or enterprise settings — this designation may affect whether your organization can maintain its Anthropic relationship. The dispute also reveals that the two specific provisions Anthropic refused to remove — prohibitions on mass surveillance and autonomous weapons — are governance commitments the company is willing to lose hundreds of millions of dollars to enforce.

Governance Provisions Involved
HIGH CSAM Prohibition and Mandatory Reporting

This is one of the most serious prohibitions in the policy — violations constitute federal crimes and Anthropic has committed to active detection and…

HIGH High-Risk Use Case Requirements for Medical, Legal, and Financial Advice

This tiered compliance structure creates differential obligations for operators in regulated industries, meaning businesses in healthcare, law, and f…

HIGH Products Serving Minors — Enhanced Safety Requirements

Any app or platform using Claude that is designed for or likely to attract minor users must implement additional protections — failure to do so expos…

HIGH Prohibition on Compromising Critical Infrastructure

Attacks on critical infrastructure constitute federal crimes and are among the most serious AI misuse scenarios — this prohibition aligns with nation…

HIGH Prohibition on Privacy and Identity Rights Violations

The explicit prohibition on AI impersonation of humans — and the inclusion of neural data alongside biometric data — reflects emerging regulatory sta…

HIGH Prohibition on Psychological Manipulation and Deceptive Content

This provision prohibits some of the most harmful uses of generative AI — including AI-powered influence operations and large-scale disinformation ca…

HIGH Prohibition on Weapons of Mass Destruction Development

This is one of the most absolute prohibitions in the document — there are no carve-outs, exceptions, or research exemptions stated for CBRN weapons d…

HIGH Universal Prohibition on Weapons Development

This prohibition directly prevents misuse of AI for mass-casualty weapon development and exposes violators to serious federal criminal liability.

HIGH Weapons of Mass Destruction Prohibition

This is one of the absolute prohibitions in the policy, covering not just direct weapon design but also precursor development, weaponization processe…

Connected Regulations
Trump Executive Order on AI Policy Framework (US) · Effective Dec 11, 2025

This is an editorial governance record documenting a significant policy event based on publicly reported information. It is not generated from an automated document diff. Analysis reflects reported actions and their governance implications.

Evidence Verification

Current Version: 4e7cc5787a7f0e9a742ee5439b657f5e39b473df9f183776b8d05da531011637 · March 6, 2026 18:27 UTC · ✓ Verified
Change Detected: February 27, 2026 00:00 UTC · ✓ Verified
Source Document: https://www.anthropic.com/legal/aup · ✓ Verified
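
The Current Version fingerprint is a 64-character hexadecimal value, which is consistent with a SHA-256 digest. As a minimal sketch, assuming the fingerprint is the SHA-256 hash of the captured document bytes (the record does not state the hashing scheme), a reader could check a saved copy of the capture against it:

# Minimal sketch under the assumptions noted above: compare a locally saved
# capture of the policy against the fingerprint listed in this record,
# treating that fingerprint as a SHA-256 digest of the saved file's bytes.
import hashlib

EXPECTED = "4e7cc5787a7f0e9a742ee5439b657f5e39b473df9f183776b8d05da531011637"

def sha256_hex(path: str) -> str:
    """Return the hex SHA-256 digest of the file at `path`."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    # "anthropic-aup-capture.html" is a hypothetical filename for a saved copy.
    digest = sha256_hex("anthropic-aup-capture.html")
    print("match" if digest == EXPECTED else "mismatch: " + digest)

Note that hashing a fresh download of https://www.anthropic.com/legal/aup would only reproduce this value if the page is byte-for-byte identical to the March 6, 2026 capture; any later edit, or differences in how the page is rendered and saved, would produce a different digest.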
Citation Record
Entity: Anthropic
Document: Anthropic API Usage Policy
Record ID: CA-C-001866
Captured: 2026-02-27 00:00:00 UTC
URL: https://conductatlas.com/change/2026-02-27-anthropic-anthropic-api-usage-policy-1866/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.

Document Context

Document: Anthropic API Usage Policy
Entity: Anthropic
Captured: March 6, 2026
Source URL: https://www.anthropic.com/legal/aup
Related Analysis
Privacy · April 14, 2026
Deleted Claude Conversations Aren't Gone for 30 Days

Anthropic is more transparent than most AI companies about data retention. Here's exactly what happens when you delete your data, and how t…

Get the biggest policy changes across 320+ platforms every Sunday.