9 provisions flagged: 3 high severity, 6 medium severity, 0 low severity
Summary

Anthropic's Usage Policy sets the rules for what you can and cannot do when using Claude and other Anthropic products, whether you access them directly or through a third-party app built on Anthropic's technology. The most important point: if Anthropic determines you have violated this policy, your access can be throttled, suspended, or terminated, and certain inputs may be blocked or outputs modified without prior notice. If you believe an AI output is inaccurate, biased, or harmful, you can report it directly to usersafety@anthropic.com or via the in-product thumbs-down feedback button.

Technical / Legal Breakdown

Anthropic's Usage Policy (effective September 15, 2025) governs all inputs submitted to Anthropic's products and services, including via authorized resellers and passthrough access. It is structured in three tiers: Universal Usage Standards applicable to all users, High-Risk Use Case Requirements for elevated-risk consumer-facing deployments, and Additional Use Case Guidelines covering chatbots, minors, agentic use, and Model Context Protocol servers.

The agreement states that Anthropic's Safeguards Team will implement detection and monitoring to enforce the policy, and the terms authorize throttling, suspension, or termination of access for violations, as well as blocking or modifying model outputs. The categorical prohibition on CSAM includes an explicit commitment to report detected material to authorities, and the children's safety provisions define a minor as any individual under 18 regardless of jurisdiction, which is operationally notable for international deployments. The governmental customer carve-out, which permits tailored use restrictions where Anthropic judges contractual safeguards adequate, is a relatively unusual provision that creates differentiated policy application across customer segments.

The document engages COPPA, GDPR, the EU AI Act, and FTC Act consumer protection frameworks, along with sector-specific considerations under HIPAA for health data use cases. The agentic use and MCP server provisions will likely require evaluation under emerging AI-specific regulatory guidance in the EU and potentially the UK; enforcement authority relevance varies by jurisdiction and specific use case.

Institutional Analysis

Institutional analysis is available with Professional: regulatory exposure by statute, material risk assessment, vendor due diligence action items, and enforcement precedent.

1 important change detected

2 versions captured · Last updated: February 2026

What changed: The Department of Defense designated Anthropic a supply chain risk after the company refused to remove two governance restrictions from its acceptable use policy: prohibitions on mass domestic surveillance and fully autonomous weapons systems.
Why this matters: Defense contractors and federal agencies using Claude must find alternatives, and enterprise customers with defense-adjacent business face compliance risk.
High — 3 provisions
Medium — 6 provisions

Monitoring

Anthropic has updated this document before.

Watcher includes same-day alerts, structured change summaries, and monitoring for up to 10 platforms.
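A change-detection pipeline of this kind can be sketched with nothing beyond the Python standard library: capture two versions of a document and diff them line by line. The snippet below illustrates the general technique only; the function name `summarize_change` and the sample policy lines are hypothetical, not Watcher's actual implementation.

```python
import difflib

def summarize_change(old_text: str, new_text: str) -> list[str]:
    """Return unified-diff lines describing what changed between two captured versions."""
    return list(difflib.unified_diff(
        old_text.splitlines(),
        new_text.splitlines(),
        fromfile="version-1",
        tofile="version-2",
        lineterm="",
    ))

# Hypothetical before/after excerpts of a usage policy.
old = "Prohibited: mass domestic surveillance.\nPermitted: research use."
new = "Prohibited: mass domestic surveillance and autonomous weapons.\nPermitted: research use."
for line in summarize_change(old, new):
    print(line)
```

Running the sketch prints removed lines prefixed with `-` and added lines prefixed with `+`, which is the raw material a structured change summary would be built from.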


Professional Governance Intelligence

Need provision-level monitoring and regulatory mapping?

Professional includes governance timelines, compliance memos, audit-ready analysis, and full provision tracking.


Cross-platform context

See how other platforms handle Account Termination Without Notice and similar clauses.


Mapped Governance Frameworks

California AB 2013 AI Training Data Transparency · US-CA
CCPA/CPRA · California, USA
DMCA · United States Federal
DSA · European Union
FTC Act Section 5 · United States Federal
GDPR · European Union
UK GDPR · United Kingdom

Related Analysis

Privacy · April 14, 2026
Deleted Claude Conversations Aren't Gone for 30 Days

Anthropic is more transparent than most AI companies about data retention. Here's exactly what happens when you delete your data, and how t…

Archival Provenance

Source & Archival Record
Last Captured: March 6, 2026 18:30 UTC
Capture Method: Automated scheduled archival capture
Document ID: CA-D-000013
Version ID: CA-V-001638
SHA-256: 6a6ebdde1850cdb2829e7e42df957bf9ceab8d0e59404bbcd6c69c3e9385b2c5
✓ Snapshot stored ✓ Text extracted ✓ Change verified ✓ Hash verified
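The record above publishes a SHA-256 digest of the captured snapshot, which lets anyone re-verify an archived copy independently: recompute the hash over the exact captured bytes and compare it to the published value. A minimal sketch in Python follows; the function name `verify_snapshot` and the sample content are illustrative, not the actual archived bytes behind the digest shown above.

```python
import hashlib

def verify_snapshot(snapshot_bytes: bytes, recorded_digest: str) -> bool:
    """Recompute SHA-256 over the captured bytes and compare to the published hex digest."""
    return hashlib.sha256(snapshot_bytes).hexdigest() == recorded_digest.lower()

# Illustrative check against a digest computed from known content.
content = b"example captured policy text"
digest = hashlib.sha256(content).hexdigest()
print(verify_snapshot(content, digest))                 # True: bytes match the record
print(verify_snapshot(content + b" tampered", digest))  # False: any edit changes the hash
```

Because SHA-256 is collision-resistant, a matching digest is strong evidence the snapshot you hold is byte-for-byte identical to what was originally captured.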

Governance Monitoring

Monitor governance changes across the platforms you rely on.

Structured alerts for policy changes, governance events, and provision updates across 318+ platforms.
