If you rate a Claude response using the thumbs up or thumbs down button, Anthropic stores the full conversation as feedback and may use it however it chooses, including for AI training, with no compensation to you.
This analysis describes what Anthropic's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
Rating any Claude output triggers storage of the entire associated conversation as feedback and grants Anthropic unrestricted use rights over that conversation, including for model training, regardless of whether you have opted out of training in your account settings.
Every thumbs up or thumbs down rating submitted by a user causes the full conversation to be stored as feedback and made available for unrestricted use by Anthropic, including model training. This occurs independently of the general model training opt-out setting.
How other platforms handle this
You agree, however, that (i) by submitting unsolicited ideas to Wealthfront or any of its employees or representatives, by any medium, including but not limited to email, written, or oral communication, you automatically forfeit your right to any intellectual property rights in such ideas; and (ii) ...
If you provide Writer with any feedback, suggestions, or other input regarding the Services ('Feedback'), you hereby assign to Writer all right, title, and interest in and to such Feedback, including all intellectual property rights therein. Writer may use such Feedback for any purpose without restr...
You may give a Redfin Company Feedback. You hereby assign to the applicable Redfin Company all of your right, title, and interest in and to the Feedback. To the extent applicable law does not permit assignment of the Feedback, you hereby grant the Redfin Companies a perpetual, irrevocable, worldwide...
Monitoring
Anthropic has changed this document before.
"We appreciate feedback, including ideas and suggestions for improvement or rating an Output in response to an Input ("Feedback"). If you rate an Output in response to an Input—for example, by using the thumbs up/thumbs down icon—we will store the related conversation as part of your Feedback. You have no obligation to give us Feedback, but if you do, you agree that we may use the Feedback however we choose without any obligation or other payment to you."

— Excerpt from Anthropic's Consumer Terms
(1) REGULATORY LANDSCAPE: This provision implicates GDPR Articles 6 and 7, which govern the legal basis and conditions for processing personal data collected through feedback interactions. The broad "however we choose" formulation may require evaluation under GDPR's purpose limitation principle (Article 5(1)(b)), which requires that data be collected for specified, explicit, and legitimate purposes. The CCPA is relevant to disclosure of how feedback data is used, and EU AI Act training-data transparency requirements may also apply to this provision.

(2) GOVERNANCE EXPOSURE: Medium. The provision grants Anthropic rights to use feedback-associated conversations without restriction or compensation. The scope of "however we choose" is broad and may face challenges under GDPR's purpose limitation and data minimization principles if contested by EU or UK users.

(3) JURISDICTION FLAGS: EU and UK users retain data subject rights under the GDPR and UK GDPR (including erasure and objection rights) that may limit the practical effect of this unrestricted use grant. California residents retain CCPA rights to know and control how their personal information is used. The provision should be evaluated against applicable data protection law in each operating jurisdiction.

(4) CONTRACT AND VENDOR IMPLICATIONS: Organizations deploying Claude.ai for employees should note that employee feedback interactions may be stored and used without restriction under these terms, which may conflict with internal data handling policies or employment agreements. This warrants review under GDPR Article 88, which addresses processing in the employment context.

(5) COMPLIANCE CONSIDERATIONS: Privacy notice and in-product disclosure reviews should confirm that users are clearly informed, at the point of using the feedback interface, that doing so triggers conversation storage and unrestricted use rights. Data mapping exercises should account for feedback-derived data as a separate category with distinct retention and use characteristics.
Built from archived source documents, structured governance mappings, and historical version tracking.
Is ConductAtlas affiliated with Anthropic? No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Anthropic.