If you give Anthropic any feedback, including rating a response with thumbs up or down, Anthropic may use that feedback however it chooses, with no obligation to compensate you or to limit how the feedback is used.
This analysis describes what Anthropic's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
Rating a response is treated as providing feedback, which stores the full conversation and grants Anthropic unrestricted use rights over it, including for AI training purposes even if you have opted out of training.
Clicking thumbs up or thumbs down on a Claude response stores that full conversation as feedback and grants Anthropic unlimited rights to use it, including for AI model training, bypassing any training opt-out you have enabled. Users who want to limit their training data contribution should avoid using the rating feature.
How other platforms handle this
You agree, however, that (i) by submitting unsolicited ideas to Wealthfront or any of its employees or representatives, by any medium, including but not limited to email, written, or oral communication, you automatically forfeit your right to any intellectual property rights in such ideas; and (ii) ...
If you provide Writer with any feedback, suggestions, or other input regarding the Services ('Feedback'), you hereby assign to Writer all right, title, and interest in and to such Feedback, including all intellectual property rights therein. Writer may use such Feedback for any purpose without restr...
You may give a Redfin Company Feedback. You hereby assign to the applicable Redfin Company all of your right, title, and interest in and to the Feedback. To the extent applicable law does not permit assignment of the Feedback, you hereby grant the Redfin Companies a perpetual, irrevocable, worldwide...
Monitoring
Anthropic has changed this document before.
Receive same-day alerts, structured change summaries, and monitoring for up to 10 platforms.
"We appreciate feedback, including ideas and suggestions for improvement or rating an Output in response to an Input ('Feedback'). If you rate an Output in response to an Input—for example, by using the thumbs up/thumbs down icon—we will store the related conversation as part of your Feedback. You have no obligation to give us Feedback, but if you do, you agree that we may use the Feedback however we choose without any obligation or other payment to you.— Excerpt from Anthropic's Anthropic API Terms
REGULATORY LANDSCAPE: The broad 'use however we choose' grant in feedback may engage GDPR Article 6 lawful-basis requirements if the feedback contains personal data, as unrestricted use may exceed the scope of the original consent. UK GDPR imposes equivalent requirements. The CCPA's right to opt out of the sale or sharing of personal information may apply if feedback data is used in ways that constitute sharing under California law. The FTC's unfair-or-deceptive-practices authority is relevant if the connection between the rating action and the training-consent bypass is not adequately disclosed.

GOVERNANCE EXPOSURE: Medium. The mechanism by which a simple UI interaction (thumbs up/down) triggers both conversation storage and an unlimited rights grant is operationally significant and may not be sufficiently salient to users in the flow of normal product use. The interaction with the training opt-out carve-out amplifies this concern.

JURISDICTION FLAGS: EU and UK users may have GDPR-based rights to object to processing of personal data contained in feedback conversations for purposes beyond the original stated purpose. California users retain CCPA rights regarding use of personal information. The adequacy of disclosure at the point of the rating interaction, rather than only in the terms document, may be a focus of regulatory scrutiny.

CONTRACT AND VENDOR IMPLICATIONS: Enterprise deployments should assess whether the feedback rights grant and associated training use creates data governance risks for employee conversations that happen to be rated. Product teams integrating Claude should consider whether UI elements that trigger feedback collection are adequately labeled.

COMPLIANCE CONSIDERATIONS: Privacy teams should assess whether the feedback mechanism constitutes a lawful basis for processing under GDPR and whether additional in-product disclosure is required at the point of the rating interaction. Data mapping exercises should include feedback-linked conversations as a distinct data category with different processing permissions than non-feedback conversations. EU Data Protection Officers should evaluate whether the unrestricted use grant satisfies purpose-limitation requirements under GDPR Article 5.
Full compliance analysis
Regulatory citations, enforcement risk, and due diligence action items.
Free: track 1 platform + weekly digest. Watcher: 10 platforms + same-day alerts. No credit card required.
Professional Governance Intelligence
Need to monitor specific governance provisions?
Professional includes provision-level monitoring, governance timelines, regulatory mapping, and audit-ready analysis.
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Anthropic.