This analysis describes what Anthropic's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
The safety review carve-out means your opt-out is not absolute, and conversations that Anthropic internally flags for safety purposes can be retained and used for model training regardless of your preference.
The policy states that Inputs (your messages to Claude) and Outputs (Claude's responses) may be used to train Anthropic's AI models, and that this use continues even after opting out if the conversation is flagged for safety review or if you have explicitly submitted feedback about it. Device identifiers, IP address-derived location data, browsing activity, and usage information are also collected automatically. You can opt out of conversation use for model training by accessing your account settings at Claude.ai.
How other platforms handle this
We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.
We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization).
Users under 18 years old interact with an age-appropriate model specifically designed to reduce the likelihood of exposure to sensitive or suggestive content. Our under-18 model has additional and more conservative classifiers than the model for our adult users so we can enforce our content policies...
Monitoring
Anthropic has changed this document before.
Receive same-day alerts, structured change summaries, and monitoring for up to 10 platforms.
"We may use your Inputs and Outputs to train our models and improve our Services, unless you opt out through your account settings. Even if you opt-out, we will use Inputs and Outputs for model improvement when: (1) your conversations are flagged for safety review to improve our ability to detect harmful content, enforce our policies, or advance AI safety research, or (2) you've explicitly reported the materials to us (for example via our feedback mechanisms)." — Excerpt from Anthropic's Privacy Policy
How 10 AI platforms describe the use of user data for model training, improvement, and development, based on archived governance provisions.
Built from archived source documents, structured governance mappings, and historical version tracking.
Is ConductAtlas affiliated with Anthropic? No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Anthropic.