If Anthropic causes you harm, the most you can recover is the amount you paid in the 12 months before the problem occurred, or $100 if you paid nothing. You also cannot claim compensation for lost data, lost business, or other indirect harm.
This analysis describes what Anthropic's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.
This cap means that even significant harm caused by AI errors or service failures may result in very limited financial recovery, particularly for free-tier users who are capped at $100.
Interpretive note: Enforceability varies by jurisdiction; EU and UK consumer protection law may limit the practical effect of this clause for users in those regions.
Free-tier Claude.ai users are limited to $100 in recoverable damages regardless of the actual harm suffered. Paid subscribers are limited to 12 months of fees paid, which may be substantially less than actual losses in cases involving data loss, business disruption, or reliance on inaccurate AI outputs.
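The cap arithmetic described above can be sketched in a few lines. This is an illustrative model of the rule as stated in the clause, not legal advice; the $20/month figure in the usage example is a hypothetical subscription price.

```python
def recovery_cap(fees_paid_last_12_months: float) -> float:
    """Maximum recoverable damages under the clause as described above:
    the fees actually paid in the 12 months before the claim arose,
    or $100 if no fees were paid in that period."""
    if fees_paid_last_12_months > 0:
        return fees_paid_last_12_months
    return 100.0

# A hypothetical subscriber paying $20/month for the full year:
print(recovery_cap(20 * 12))  # 240 — regardless of actual losses
# A free-tier user:
print(recovery_cap(0))        # 100.0
```

The asymmetry is the point: recoverable damages scale with what the user paid in, not with the harm suffered.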
How other platforms handle this
TO THE MAXIMUM EXTENT PERMITTED BY LAW, NEITHER WHATNOT NOR ITS SERVICE PROVIDERS INVOLVED IN CREATING, PRODUCING, OR DELIVERING THE SERVICES WILL BE LIABLE FOR ANY INCIDENTAL, SPECIAL, EXEMPLARY OR CONSEQUENTIAL DAMAGES, OR DAMAGES FOR LOST PROFITS, LOST REVENUES, LOST SAVINGS, LOST BUSINESS OPPORT...
In no event will either party's aggregate liability arising out of or related to this Agreement exceed the total fees paid or payable by Customer in the twelve (12) months preceding the claim. In no event will either party be liable for any indirect, incidental, special, consequential, or punitive d...
BY AGREEING TO THE TERMS OF THIS AGREEMENT, YOU ARE ALSO AGREEING TO CONTRACTUAL TERMS THAT WILL LIMIT SOME OF YOUR LEGAL RIGHTS, INCLUDING A DISCLAIMER OF WARRANTY, AN EXCLUSION OF CERTAIN KINDS OF DAMAGES, AND A LIMITATION OF LIABILITY.
Monitoring
Anthropic has changed this document before.
"To the maximum extent permitted by law, Anthropic's total cumulative liability to you or any third party under these Terms, from all causes of action and all theories of liability, will be limited to, and will not exceed, the fees you actually paid us during the 12 months before the claim arose (or $100, if you have not paid us any fees during such period). In no event will Anthropic be liable for any indirect, incidental, punitive, consequential, or exemplary damages of any kind, including damages arising from loss of data or loss of business, arising out of these Terms or relating to them, our Services or any associated software, even if we have been notified of the possibility of such damages.— Excerpt from Anthropic's Anthropic API Terms
Regulatory landscape
EU and UK consumer protection law generally prohibits limitation of liability clauses that exclude or restrict liability for death, personal injury, or fraud caused by the service provider. The EU Unfair Contract Terms Directive may render limitations of liability in consumer contracts unenforceable where they create a significant imbalance to the detriment of the consumer. The FTC may scrutinize limitation of liability clauses that are buried or inadequately disclosed under unfair or deceptive practices standards. Australian Consumer Law provides non-excludable consumer guarantees that may limit the practical effect of this clause for Australian users.

Governance exposure
Medium. The $100 cap for non-paying users is commercially standard for consumer AI products but creates a significant gap between potential harm (particularly from reliance on inaccurate outputs in consequential decisions) and recoverable damages. The exclusion of consequential damages, including loss of data, is particularly significant given that AI outputs may be used in business or creative workflows.

Jurisdiction flags
EU and UK jurisdictions may find portions of this clause unenforceable under consumer protection law. Australia's non-excludable statutory guarantees create similar exposure. California's consumer protection framework may interact with this limitation in class action contexts. The agreement's statement that the limitation applies "to the maximum extent permitted by law" is a standard carve-out that acknowledges jurisdictional variability.

Contract and vendor implications
Commercial users relying on Claude outputs for business-critical decisions should assess whether the $100 or 12-month fee cap is adequate given the risk profile of their use case. Indemnification provisions in downstream contracts should account for the limited recourse available against Anthropic. Cyber insurance policies should be evaluated for coverage gaps created by this limitation.

Compliance considerations
Legal teams advising on Claude integration should disclose the liability limitation to internal stakeholders and assess whether the organization's risk tolerance is compatible with the cap. For EU and UK deployments, legal review should assess enforceability of the limitation under applicable consumer protection frameworks. Contract templates incorporating Claude-generated content should include appropriate disclaimers reflecting the limited liability structure.
Built from archived source documents, structured governance mappings, and historical version tracking.
ConductAtlas has identified this type of provision across 228 platforms. See the full comparison.
Is ConductAtlas affiliated with Anthropic?
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Anthropic.