10 provisions flagged in total: 7 high severity, 2 medium severity, 1 low severity.
Summary

This is Anthropic's official rules document, effective September 2025, setting out what you can and cannot do when using Claude and other Anthropic AI products, including apps built by third-party developers on Anthropic's technology. The single most important thing to know is that Anthropic actively monitors usage and can block, throttle, or permanently terminate your access if you violate these rules, and can report you to law enforcement if CSAM or child exploitation content is detected. If you believe Claude has produced harmful or inaccurate output, you can report it directly at usersafety@anthropic.com or via the thumbs-down feedback button in the product.

Technical Summary

Anthropic's Usage Policy (AUP), effective September 15, 2025, governs all users who submit inputs to Anthropic products and services, including via authorized resellers and passthrough access. It is structured around three tiers: Universal Usage Standards, High-Risk Use Case Requirements, and Additional Use Case Guidelines.

The most significant obligations include absolute prohibitions on CSAM, weapons of mass destruction development, critical infrastructure attacks, and impersonation, alongside tiered requirements for high-risk use cases such as mental health support, legal advice, medical guidance, and political advertising that mandate specific safeguards, including human oversight, crisis intervention resources, and transparency disclosures.

Notably, the policy explicitly permits Anthropic to contract with governmental customers for tailored use restrictions that deviate from the standard AUP, creating a two-tier enforcement regime not commonly found in comparable AI platform policies. It also reserves the right both to throttle or terminate access and to unilaterally block or modify model outputs without prior notice.

The policy engages the EU AI Act (high-risk AI system obligations), FTC Act Section 5 (unfair or deceptive practices, particularly around impersonation and deepfakes), COPPA (products serving minors), CSAM reporting obligations under 18 U.S.C. § 2258A (CyberTipline reports to NCMEC), and sector-specific laws in the healthcare, financial services, and legal domains.

Material compliance considerations include the agentic use guidelines, which require human-in-the-loop controls and minimal-footprint principles, and the MCP server requirements, which create downstream liability exposure for operators deploying third-party integrations.
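The human-in-the-loop requirement for agentic use can be illustrated with a minimal sketch. Everything here (the `ActionRequest` type, the `require_approval` gate, the risk labels) is a hypothetical design for illustration, not any Anthropic API or a mechanism prescribed by the policy.

```python
# Hypothetical sketch of a human-in-the-loop control for agentic actions.
# All names are illustrative; the AUP requires human oversight but does
# not mandate this particular design.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ActionRequest:
    tool: str                      # e.g. "send_email", "delete_file"
    payload: dict = field(default_factory=dict)   # arguments the agent proposes
    risk: str = "low"              # "low" or "high", classified upstream

def require_approval(action: ActionRequest,
                     approve: Callable[[ActionRequest], bool]) -> bool:
    """Auto-allow low-risk actions; route high-risk ones to a human approver."""
    if action.risk == "low":
        return True
    return approve(action)         # blocks until a human decides

# Usage: a stand-in approver that rejects every high-risk action.
blocked = require_approval(
    ActionRequest(tool="delete_file", payload={"path": "/tmp/x"}, risk="high"),
    approve=lambda a: False,
)
```

In a real deployment the `approve` callback would surface the request to a reviewer UI or ticket queue rather than a lambda; the point is that high-risk tool calls never execute on the agent's authority alone.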

Evidence Provenance

Captured: March 6, 2026 18:30 UTC
Document ID: CA-D-000013
Version ID: CA-V-000043
Wayback Machine: archived versions available
SHA-256: 6a6ebdde1850cdb2829e7e42df957bf9ceab8d0e59404bbcd6c69c3e9385b2c5
Status: ✓ Snapshot stored ✓ Text extracted ✓ Change verified ✓ Cryptographically signed
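Anyone holding a copy of the captured text can check it against the recorded digest above. A minimal sketch, assuming the snapshot is available as a local file (the file path is hypothetical; the digest is the one listed in the provenance block):

```python
# Verify a captured document against the recorded SHA-256 digest.
import hashlib

RECORDED = "6a6ebdde1850cdb2829e7e42df957bf9ceab8d0e59404bbcd6c69c3e9385b2c5"

def sha256_of(data: bytes) -> str:
    """Hex digest of the raw bytes of the captured document."""
    return hashlib.sha256(data).hexdigest()

def matches_record(data: bytes, recorded: str = RECORDED) -> bool:
    # Public digests need no constant-time compare; plain equality is fine.
    return sha256_of(data) == recorded

# Usage (hypothetical file name):
# with open("anthropic-usage-policy.txt", "rb") as f:
#     print(matches_record(f.read()))
```

Note that the digest covers one exact byte sequence: trailing whitespace or encoding differences in a re-downloaded copy will produce a mismatch even if the visible text is identical.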
Analyzed Changes

1 change analyzed since monitoring began.

What changed: Anthropic updated their Usage Policy on March 6, 2026. Change detected: 1 sentence modified; the document contained 28 sentences after the update.
Consumer impact: Anthropic made a cosmetic change to the heading of the Usage Policy on March 6, 2026, removing a redundant title line from the document. The actual rules, restrictions, and user obligations in the policy remain completely unchanged. This change has no practical effect on how users can or cannot use Anthropic's products.
Why it matters: This change is purely cosmetic and does not affect any user rights or obligations under the policy. It is noted here for completeness and document tracking purposes only.
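The sentence-level change detection described above can be sketched with the standard library. The sentence splitter here is a naive heuristic for illustration; a production monitor would use a proper segmenter.

```python
# Count modified sentences between two document versions using difflib.
import difflib
import re

def sentences(text: str) -> list[str]:
    # Naive split on sentence-ending punctuation followed by whitespace.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def modified_sentences(old: str, new: str) -> int:
    """Number of sentences that were added, removed, or replaced."""
    sm = difflib.SequenceMatcher(a=sentences(old), b=sentences(new))
    changed = 0
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op != "equal":
            changed += max(i2 - i1, j2 - j1)
    return changed

# Usage: a heading change like the one analyzed above counts as one
# modified sentence, with the rest of the text untouched.
old = "Acceptable Use Policy. The rules apply to all users."
new = "Usage Policy. The rules apply to all users."
```

`SequenceMatcher` over the sentence lists reports `replace`, `insert`, and `delete` opcodes; summing the larger side of each non-equal block gives a conservative count of affected sentences.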

Recent Clause-Level Changes Mar 6, 2026

8 provisions unchanged.

High Severity — 7 provisions
Medium Severity — 2 provisions
Low Severity — 1 provision


Applicable Regulations

EU AI Act (European Union)
BIPA (Illinois, USA)
CCPA/CPRA (California, USA)
CFAA (United States, federal)
CAN-SPAM (United States, federal)
DMCA (United States, federal)
DSA (European Union)
GDPR (European Union)
UK GDPR (United Kingdom)
