This document sets the rules for using Anthropic's consumer AI products, such as Claude.ai and Claude Pro. It covers how your conversations may be used to train AI models (you can opt out in settings), how subscriptions and billing work, and what your rights are if something goes wrong. Importantly, if you have a legal dispute with Anthropic, you generally must resolve it through private arbitration rather than in court, and you waive your right to join a class action lawsuit.
Anthropic's Consumer Terms of Service (effective October 8, 2025) govern individual use of Claude.ai, Claude Pro, and associated products. The agreement creates obligations relating to acceptable use, intellectual property, data handling, subscriptions, and dispute resolution. Notable provisions include a mandatory arbitration clause with a 30-day opt-out window, automatic subscription renewal with a 24-hour cancellation deadline, broad rights for Anthropic to use user-submitted materials for model training (with a limited opt-out), a class action waiver, and significant limitations on Anthropic's liability. The agreement is governed by California law and incorporates an Acceptable Use Policy by reference.