By default, Anthropic can use your Claude conversations to train its AI. You can turn this off in settings, but even if you do, giving a thumbs up or down on a response, or having any message flagged for safety, means that content is still used for training.
If you opt out of model training but ever rate a Claude response or have a conversation flagged for safety review, your conversation data is still used to train Anthropic's AI — limiting the practical effectiveness of the opt-out for most active users.
The opt-out is partially illusory: two common and difficult-to-avoid actions (giving feedback and having content safety-reviewed) permanently override your opt-out preference, meaning many users who believe they have opted out may still be contributing training data.
REGULATORY FRAMEWORK: This provision implicates GDPR Art. 6(1)(a) (consent as lawful basis), Art. 7 (conditions for consent, including the right to withdraw), and Art. 5(1)(b) (purpose limitation), as enforced by EU/EEA Data Protection Authorities. It also engages CCPA §1798.120 (right to opt out of sale/sharing of personal information) and §1798.100 (right to know), enforced by the California Privacy Protection Agency (CPPA) and the California Attorney General. The UK GDPR and the Data Protection Act 2018 apply to UK users, enforced by the ICO. FTC Act Section 5 is relevant to whether the opt-out mechanism constitutes a deceptive practice given the scope of the carve-outs.