This document sets the rules for using Anthropic's AI products like Claude.ai and Claude Pro. It covers how your conversations may be used to train AI models (you can opt out in settings), how subscriptions and billing work, and your rights if something goes wrong. Importantly, if you have a legal dispute with Anthropic, you generally must resolve it through private arbitration rather than a court, and you waive your right to join a class action lawsuit.
Anthropic's Consumer Terms of Service (effective October 8, 2025) governs individual use of Claude.ai, Claude Pro, and associated products. The agreement creates obligations relating to acceptable use, intellectual property, data handling, subscriptions, and dispute resolution. Notable provisions include a mandatory arbitration clause with a 30-day opt-out window, automatic subscription renewal with a 24-hour cancellation deadline, broad rights for Anthropic to use user-submitted materials for model training (with a limited opt-out), a class action waiver, and significant limitations on Anthropic's liability. The agreement is governed by California law and incorporates an Acceptable Use Policy by reference.
This agreement engages CCPA, GDPR (for EU users), and general FTC consumer protection frameworks. The mandatory arbitration clause and class action waiver create litigation risk mitigation for Anthropic but reduce collective redress options for consumers. Compliance teams should note the data use provisions, under which user-submitted materials may be used for model training unless the user opts out.