Anthropic can use your conversations with Claude, including both your prompts and Claude's responses, to train its AI models by default. You can turn this off in settings, but your conversations can still be used for training if you give feedback (such as a thumbs up/down) or if your content is flagged for a safety review.
This analysis describes what Anthropic's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.
This provision means that even users who opt out of training cannot fully prevent their conversation data from being used in AI model development: the feedback and safety-review carve-outs still apply, which matters for any personal data shared in conversations.
Your conversation content, including sensitive personal information you may share, can be used to train Anthropic's AI models by default, and opting out does not cover situations where you submit feedback or your content triggers a safety review.
How other platforms handle this
How 10 AI platforms describe the use of user data for model training, improvement, and development, based on archived governance provisions. Excerpts:
We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization). We may leverage these models independent of user selection for processi...
We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.
(ix) engage in any of the foregoing in connection with the use, creation, development, modification, prompting, fine-tuning, training, testing, benchmarking or validation of any machine learning tool, model, system, algorithm, product or other technology.
"We may use Materials to provide, maintain, and improve the Services and to develop other products and services, including training our models, unless you opt out of training through your account settings. Even if you opt out, we will use Materials for model training when: (1) you provide Feedback to us regarding any Materials, or (2) your Materials are flagged for safety review to improve our ability to detect harmful content, enforce our policies, or advance our safety research.— Excerpt from Anthropic's Claude.ai Terms of Service
REGULATORY LANDSCAPE: This provision implicates GDPR Article 6 (lawful basis for processing) and Article 5 (purpose limitation and data minimisation) for EU and UK users, as use of conversation data for model training may require a distinct lawful basis from service delivery. The carve-outs for safety review and feedback may require evaluation as separate processing activities. Enforcement authority includes data protection authorities in EU member states and the UK ICO. CCPA may require disclosure and opt-out rights for California residents regarding use of personal information for training. FTC Act authority is relevant to whether the opt-out mechanism is clearly disclosed and functional.

GOVERNANCE EXPOSURE: High. The training use of inputs and outputs constitutes processing of potentially sensitive personal data at scale. The carve-outs embedded in the opt-out mechanism mean the opt-out is not a complete data use restriction, which may not be apparent to average consumers. The adequacy of notice and the accessibility of the opt-out mechanism are compliance-critical points.

JURISDICTION FLAGS: EU and UK users face heightened exposure given GDPR and UK GDPR purpose limitation and consent requirements. California residents have CCPA rights regarding use of personal information that may interact with this provision. The breadth of the safety review carve-out may be difficult to operationalize transparently across jurisdictions.

CONTRACT AND VENDOR IMPLICATIONS: Enterprise procurement teams should assess whether employee conversation data processed under personal accounts is subject to this training use, and whether the business domain account linking provision intersects with employer data governance obligations. B2B contracts that incorporate consumer-facing services should flag this provision for data processing agreement review.

COMPLIANCE CONSIDERATIONS: Legal teams should verify that the opt-out mechanism is technically implemented and accessible, document the lawful basis for training use in GDPR-required records of processing activities, and assess whether the safety review and feedback carve-outs are adequately disclosed in privacy notices. Data mapping should account for the residual training data pathway that persists post opt-out.
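On the data-mapping point, one way a privacy team might capture the residual post-opt-out pathway in a GDPR Article 30-style record of processing activities. This is an illustrative structure with assumed field names and placeholder lawful-basis entries, not a prescribed format or legal advice:

```python
# Illustrative record-of-processing-activities (ROPA) entries. Field names and
# lawful-basis values are assumptions for discussion, not determinations.
processing_activities = [
    {
        "activity": "Service delivery (responding to prompts)",
        "data": "conversation inputs and outputs",
        "lawful_basis": "contract (GDPR Art. 6(1)(b)) -- assumed",
        "applies_after_opt_out": True,
    },
    {
        "activity": "Model training (default)",
        "data": "conversation inputs and outputs",
        "lawful_basis": "to be documented under GDPR Art. 6",
        "applies_after_opt_out": False,  # blocked by the settings toggle
    },
    {
        "activity": "Model training via feedback or safety-review carve-out",
        "data": "feedback-tagged and safety-flagged conversations",
        "lawful_basis": "to be documented under GDPR Art. 6",
        "applies_after_opt_out": True,  # the residual pathway to map
    },
]

# The residual pathway persists post opt-out and should appear in data maps:
residual = [a["activity"] for a in processing_activities
            if a["applies_after_opt_out"]]
```

Splitting training use into two entries, default and carve-out, keeps the post-opt-out pathway visible rather than buried inside a single "model training" purpose.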
Is ConductAtlas affiliated with Anthropic?
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Anthropic.