Anthropic may use your chat messages to train its AI, but you can turn this off in settings. However, even after you opt out, a conversation can still be used for training if it is flagged for safety reasons or if you reported it yourself.
This analysis describes what Anthropic's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.
The opt-out does not provide complete exclusion from model training use: the policy reserves the right to use flagged conversations regardless of a user's opt-out preference, and the criteria for safety flagging are not defined with operational specificity in the document.
Interpretive note: The criteria and operational process for safety flagging are not defined in the document, creating ambiguity about the practical scope of the carve-out and the extent to which the opt-out is effective.
Users who opt out of model training use should be aware that the policy still permits use of their Inputs and Outputs when a conversation is flagged for safety review. The opt-out therefore does not guarantee full exclusion of personal conversation content from model training.
How other platforms handle this
We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.
We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization). We may leverage these models independent of user selection for processi...
Users under 18 years old interact with an age-appropriate model specifically designed to reduce the likelihood of exposure to sensitive or suggestive content. Our under-18 model has additional and more conservative classifiers than the model for our adult users so we can enforce our content policies...
Monitoring
Anthropic has changed this document before.
Receive same-day alerts, structured change summaries, and monitoring for up to 10 platforms.
"We may use your Inputs and Outputs to train our models and improve our Services, unless you opt out through your account settings. Even if you opt-out, we will use Inputs and Outputs for model improvement when: (1) your conversations are flagged for safety review to improve our ability to detect harmful content, enforce our policies, or advance AI safety research, or (2) you've explicitly reported the materials to us (for example via our feedback mechanisms).— Excerpt from Anthropic's Anthropic Privacy Policy
(1) REGULATORY LANDSCAPE: This provision engages GDPR Article 6 lawful-basis requirements and Article 21 objection rights, enforced by EU Member State supervisory authorities; CCPA opt-out requirements enforced by the California Privacy Protection Agency; and Brazilian LGPD consent and legitimate-interest provisions enforced by the ANPD. The carve-out permitting training use of safety-flagged conversations after opt-out may require evaluation under the GDPR as to whether legitimate interest or another lawful basis adequately supports processing over a user's objection.

(2) GOVERNANCE EXPOSURE: Medium. The provision creates a documented exception to the opt-out that is defined by an internal operational criterion (safety flagging) whose scope is not specified in the policy. This creates potential ambiguity about the practical effect of the opt-out mechanism, which may be scrutinized by regulators evaluating whether the opt-out is meaningful under applicable law.

(3) JURISDICTION FLAGS: EU/EEA users have heightened exposure given GDPR Article 21 objection rights and Article 6 lawful-basis requirements. California residents may evaluate whether the opt-out mechanism satisfies CPRA requirements. Brazilian and South Korean users are subject to their respective national frameworks, which may impose additional constraints on legitimate-interest processing.

(4) CONTRACT AND VENDOR IMPLICATIONS: Organizations deploying Claude on behalf of employees or end users under Commercial Services agreements should assess whether this carve-out affects their own data processing obligations and whether their data processing agreements with Anthropic address the treatment of safety-flagged content.

(5) COMPLIANCE CONSIDERATIONS: Compliance teams should evaluate whether the safety-flagging carve-out is disclosed with sufficient specificity to satisfy notice requirements under applicable law, and whether the opt-out mechanism is implemented consistently with GDPR, CCPA, and LGPD opt-out obligations. Documentation of the lawful basis relied upon for processing flagged conversations after opt-out should be reviewed.
Full compliance analysis
Regulatory citations, enforcement risk, and due diligence action items.
Free: track 1 platform + weekly digest. Watcher: 10 platforms + same-day alerts. No credit card required.
How 10 AI platforms describe the use of user data for model training, improvement, and development, based on archived governance provisions.
Professional Governance Intelligence
Need to monitor specific governance provisions?
Professional includes provision-level monitoring, governance timelines, regulatory mapping, and audit-ready analysis.
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Anthropic.