This page describes what the document states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability may vary by jurisdiction.

Methodology
This is Anthropic's consumer terms of service governing your use of Claude.ai and Claude Pro. By default, conversations you have with Claude may be used to train Anthropic's AI models; you can opt out of this in your account settings, but conversations you rate with thumbs up or down, or those flagged for safety review, may still be used for training even after opting out. US users who have disputes with Anthropic are required to resolve them through individual arbitration rather than court proceedings, waiving the right to participate in class action lawsuits.
This document governs individual consumer use of Claude.ai, Claude Pro, and associated Anthropic products and services, establishing a contractual relationship between users and Anthropic, PBC. It explicitly excludes API and commercial console use, which falls under separate Commercial Terms.

The agreement states that Anthropic may use user inputs and outputs (collectively "Materials") to train its models unless users opt out via account settings, and it authorizes continued training use of Materials flagged for safety review or submitted as feedback regardless of opt-out status. The agreement also assigns Anthropic-held output rights to users subject to compliance, while users retain rights in their inputs.

The feedback provision states that users who rate outputs grant Anthropic unconditional use of the associated conversation without compensation. The terms authorize Anthropic to unilaterally modify the agreement with 30 days' notice, with continued use constituting acceptance. The limitation of liability clause caps Anthropic's liability at the greater of fees paid in the prior 12 months or $100, a cap that may face scrutiny under certain consumer protection regimes.

The agreement engages the GDPR and UK GDPR for EU and UK users, the CCPA for California residents, COPPA indirectly through an 18-plus minimum age requirement, and the EU AI Act given the AI system context. Mandatory arbitration with a class action waiver for US users, governed by California law, creates distinct dispute resolution implications that may require evaluation under applicable consumer protection frameworks in the EU, UK, and other jurisdictions where mandatory arbitration clauses face enforceability constraints.
Institutional analysis available with Professional
Regulatory exposure by statute, material risk assessment, vendor due diligence action items, and enforcement precedent.
Monitoring
Anthropic has updated this document before.
Watcher includes same-day alerts, structured change summaries, and monitoring for up to 10 platforms.
Professional Governance Intelligence
Need provision-level monitoring and regulatory mapping?
Professional includes governance timelines, compliance memos, audit-ready analysis, and full provision tracking.
Cross-platform context
See how other platforms handle mandatory arbitration, class action waivers, and similar clauses.
Compare across platforms →

Anthropic is more transparent than most AI companies about data retention. Here's exactly what happens when you delete your data, and how t…
Governance Monitoring
Structured alerts for policy changes, governance events, and provision updates across 318+ platforms.