This page describes what the document states, permits, or reserves. It does not constitute a legal determination about enforceability, and regulatory applicability may vary by jurisdiction.
This is Anthropic's terms of service for individual users of Claude.ai and Claude Pro, covering how you can use the AI assistant, what Anthropic can do with your conversations, and how subscriptions and billing work. The most important thing to know is that Anthropic may use your conversation inputs and outputs to train its AI models by default, and while you can opt out in account settings, the opt-out does not apply if you give feedback (thumbs up or down) on a response or if your content is flagged for safety review. If you are in the US, you should know that these terms include a mandatory arbitration clause and class action waiver, and you have 30 days from account creation to opt out of arbitration by emailing the designated address.
This document governs consumer use of Claude.ai, Claude Pro, and associated individual-facing Anthropic products, establishing a contract between end users and Anthropic, PBC. It explicitly excludes API and commercial Console use, which are governed separately by the Commercial Terms of Service.

The agreement states that Anthropic may use user inputs and outputs (collectively, "Materials") to train AI models unless users opt out via account settings. The terms authorize continued training use of content flagged for safety review and of user-provided feedback, regardless of opt-out status. The agreement also assigns Anthropic-generated output rights to users "to the extent permitted by applicable law," and reserves broad rights to modify, suspend, or terminate services with limited notice obligations.

The opt-out carve-outs for safety-review content and feedback-linked training data are a notable operational distinction, as are the explicit prohibition on financial advice and the automatic renewal structure, which requires cancellation at least 24 hours before the renewal date. The agreement asserts broad indemnification obligations from users and limits Anthropic's liability to direct damages not exceeding fees paid in the prior 12 months, a cap that may face scrutiny under EU and UK consumer protection frameworks restricting liability exclusions in consumer contracts.

The terms engage the GDPR and UK GDPR for EU and UK users, the CCPA for California residents, COPPA for age-gating obligations, and the EU AI Act given the deployment of large language model systems. The mandatory arbitration clause with class action waiver, applicable to US users, and its 30-day opt-out window are material compliance considerations, particularly given growing US state-level scrutiny of arbitration and California public policy limitations on class action waivers.