This document sets the rules for using Anthropic's AI products like Claude.ai and Claude Pro. It covers how your conversations may be used to train AI models (you can opt out in settings), how subscriptions and billing work, and your rights if something goes wrong. Importantly, if you have a legal dispute with Anthropic, you generally must resolve it through private arbitration rather than a court, and you waive your right to join a class action lawsuit.
Anthropic's Consumer Terms of Service (effective October 8, 2025) govern individual use of Claude.ai, Claude Pro, and associated products. The agreement creates obligations relating to acceptable use, intellectual property, data handling, subscriptions, and dispute resolution. Notable provisions include a mandatory arbitration clause with a 30-day opt-out window, automatic subscription renewal with a 24-hour cancellation deadline, broad rights for Anthropic to use user-submitted materials for model training (with a limited opt-out), a class action waiver, and significant limitations on Anthropic's liability. The agreement is governed by California law and incorporates an Acceptable Use Policy by reference.
This new provision imposes indemnification duties on users, requiring them to defend Anthropic and hold it harmless against claims arising from their conduct or content, which increases users' legal exposure.
This new provision states explicitly that all payments are non-refundable, strengthening Anthropic's financial position and limiting users' remedies for service issues or cancellations.
This new provision explicitly bars minors from accessing the service, establishing clear age-gating requirements and reducing Anthropic's liability for interactions with underage users.
The removal of explicit workplace-monitoring language may indicate either that the practice was dropped or that the language was relocated to separate enterprise terms, potentially affecting employees' privacy expectations.
The removal of explicit financial-advice disclaimers may reduce Anthropic's protection against liability when users rely on Claude for investment or financial decisions.
The removal of the unilateral modification clause may constrain Anthropic's ability to alter the terms at will, potentially improving user protections while reducing Anthropic's operational flexibility.
The removal of explicit feedback licensing language may limit Anthropic's claimed rights to user feedback and rating data used for model improvement, potentially benefiting user privacy.
The current version splits the combined arbitration and class action waiver into two separate provisions (Mandatory Arbitration Clause and Class Action Waiver), both with empty excerpts, suggesting the language was modified even though the specific text is not provided.
The provision was renamed from 'Training Data Use and Opt-Out Carve-Outs' to 'AI Model Training Data Use', with no excerpt provided, indicating the language may have been revised.
The provision was renamed from 'Subscription Auto-Renewal and No-Refund Policy' to 'Automatic Subscription Renewal' and the excerpt is now empty, suggesting textual modifications.
The severity rating was upgraded from 'medium' to 'high' and the excerpt is now empty, indicating that the provision's assessed legal weight increased and its language was revised.
The excerpt is now empty in the current version, suggesting the IP assignment language was modified or clarified.