This page describes what the document states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability may vary by jurisdiction.

Methodology
Anthropic's Usage Policy sets the rules for what you can and cannot do when using Claude and other Anthropic products, whether directly or through a third-party app built on Anthropic's technology. The single most important thing to know is that if Anthropic determines you have violated this policy, your access can be throttled, suspended, or terminated, and certain inputs may be blocked or outputs modified without prior notice. If you believe an AI output is inaccurate, biased, or harmful, you can report it directly to usersafety@anthropic.com or via the in-product thumbs-down feedback button.
Anthropic's Usage Policy (effective September 15, 2025) governs all inputs submitted to Anthropic's products and services, including via authorized resellers and passthrough access. It is structured in three tiers: Universal Usage Standards applicable to all users, High-Risk Use Case Requirements for elevated-risk consumer-facing deployments, and Additional Use Case Guidelines covering chatbots, minors, agentic use, and Model Context Protocol servers.

The agreement states that Anthropic's Safeguards Team will implement detection and monitoring to enforce the policy, and the terms authorize throttling, suspension, or termination of access for violations, as well as blocking or modifying model outputs.

The policy's categorical prohibition on CSAM includes an explicit commitment to report detected material to authorities. The children's safety provisions define a minor as any individual under 18 regardless of jurisdiction, which is operationally notable for international deployments. The governmental customer carve-out, which permits tailored use restrictions where Anthropic judges contractual safeguards adequate, is a relatively unusual provision that creates differentiated policy application across customer segments.

The document engages COPPA, GDPR, the EU AI Act, FTC Act consumer protection frameworks, and sector-specific considerations under HIPAA for health data use cases. The agentic use and MCP server provisions will likely require evaluation under emerging AI-specific regulatory guidance in the EU and potentially the UK; enforcement authority relevance varies by jurisdiction and specific use case.
1 important change detected
2 versions captured · Last updated: February 2026
Monitoring
Anthropic has updated this document before.