This is Anthropic's official rules document — effective September 2025 — that sets out what you can and cannot do when using Claude and other Anthropic AI products, including apps built by third-party developers on Anthropic's technology. The single most important thing to know is that Anthropic actively monitors your usage and can block, throttle, or permanently terminate your access if you violate these rules — and can report you to law enforcement if CSAM or child exploitation content is detected. If you believe Claude has produced harmful or inaccurate output, you can report it directly at usersafety@anthropic.com or via the thumbs-down feedback button in the product.
Anthropic's Usage Policy (AUP), effective September 15, 2025, governs all users who submit inputs to Anthropic products and services, including via authorized resellers and passthrough access. It is structured around three tiers: Universal Usage Standards, High-Risk Use Case Requirements, and Additional Use Case Guidelines.

The most significant obligations include absolute prohibitions on CSAM, weapons of mass destruction development, critical infrastructure attacks, and impersonation, alongside tiered requirements for high-risk use cases such as mental health support, legal advice, medical guidance, and political advertising, which mandate specific safeguards including human oversight, crisis intervention resources, and transparency disclosures. Notably, the policy explicitly permits Anthropic to contract with governmental customers for tailored use restrictions that deviate from the standard AUP, creating a two-tier enforcement regime not commonly found in comparable AI platform policies. It also reserves the right both to throttle or terminate access and to unilaterally block or modify model outputs without prior notice.

The policy engages the EU AI Act (high-risk AI system obligations), FTC Act Section 5 (unfair or deceptive practices, particularly around impersonation and deepfakes), COPPA (products serving minors), CSAM reporting obligations under 18 U.S.C. § 2258A (CyberTipline reports to NCMEC), and sector-specific laws in the healthcare, financial services, and legal domains. Material compliance considerations include the agentic use guidelines, which require human-in-the-loop controls and minimal-footprint principles, and the MCP server requirements, which create downstream liability exposure for operators deploying third-party integrations.