Anthropic · Anthropic Consumer Terms

Model Training Opt-Out with Feedback and Safety Exceptions

Medium severity · High confidence · Explicit document language · Unique (0 of 325 platforms)
Document Record

What it is

By default, your conversations with Claude can be used to train Anthropic's AI. You can opt out in account settings, but any conversation you rate (thumbs up or down) or that gets flagged for safety reasons can still be used for training even if you opted out.

This analysis describes what Anthropic's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

The opt-out does not provide a complete exclusion from model training; two specific categories of conversations remain eligible for training use regardless of the opt-out setting, which affects the practical scope of the privacy control offered.

Consumer impact (what this means for users)

Consumers who opt out of model training should be aware that conversations they rate using the feedback interface, or conversations flagged by Anthropic's safety systems, remain available for training use under these terms. The opt-out applies only to conversations that do not fall into these two categories.
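
The eligibility logic this implies can be summarized in a short sketch (Python; all names are hypothetical and illustrate the clause as written, not Anthropic's actual implementation):

    # Hypothetical sketch of the training-eligibility logic the clause
    # describes; names and structure are illustrative, not Anthropic's code.
    from dataclasses import dataclass

    @dataclass
    class Conversation:
        user_opted_out: bool   # account-level training opt-out toggle
        has_feedback: bool     # user rated the conversation (thumbs up/down)
        safety_flagged: bool   # flagged by safety systems for review

    def eligible_for_training(c: Conversation) -> bool:
        # Feedback and safety-flagged conversations remain eligible
        # for training regardless of the opt-out setting.
        if c.has_feedback or c.safety_flagged:
            return True
        # Otherwise the account-level opt-out controls.
        return not c.user_opted_out

    # Example: an opted-out user who rates a conversation is still eligible.
    assert eligible_for_training(Conversation(True, True, False)) is True
    assert eligible_for_training(Conversation(True, False, False)) is False

Under this reading, the opt-out toggle sets the default rule, while the two carve-outs act as unconditional overrides.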

What you can do

⚠️ These actions may provide transparency or partial mitigation but may not fully address the underlying issue. Effectiveness varies by jurisdiction and individual circumstances.
  • Opt Out of Model Training
    Log into your Claude.ai account, navigate to account settings, and locate the model training opt-out toggle. Enable the opt-out to prevent default training use of your conversations (note: this does not apply to feedback submissions or safety-flagged conversations).

How other platforms handle this

Ideogram (Medium severity)

We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.

Windsurf (Medium severity)

We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization). We may leverage these models independent of user selection for processi...

Character.AI (Medium severity)

Users under 18 years old interact with an age-appropriate model specifically designed to reduce the likelihood of exposure to sensitive or suggestive content. Our under-18 model has additional and more conservative classifiers than the model for our adult users so we can enforce our content policies...


Monitoring

Anthropic has changed this document before.

Original Clause Language

"We may use Materials to provide, maintain, and improve the Services and to develop other products and services, including training our models, unless you opt out of training through your account settings. Even if you opt out, we will use Materials for model training when: (1) you provide Feedback to us regarding any Materials, or (2) your Materials are flagged for safety review to improve our ability to detect harmful content, enforce our policies, or advance our safety research."

— Excerpt from Anthropic's Anthropic Consumer Terms


Institutional analysis (Compliance & governance intelligence)

(1) Regulatory landscape: This provision implicates GDPR Articles 6 and 9 (legal basis for processing personal data for model training), Article 7 (conditions for consent), and Recital 32 on granularity of consent. The carve-outs for feedback and safety-flagged content raise the question of whether the opt-out mechanism constitutes meaningful consent withdrawal under GDPR, which the relevant supervisory authorities (EU data protection authorities, the UK ICO) may evaluate. The CCPA is also relevant to whether model training constitutes a 'sale' or 'sharing' of personal information, and the EU AI Act may trigger training-data transparency requirements for general-purpose AI models.

(2) Governance exposure: Medium-high. The feedback and safety-review carve-outs mean the opt-out does not align with a full right to object to processing under GDPR Article 21, creating potential tension with EU and UK data protection frameworks. 'Safety review' is not defined with specificity in the document, leaving the scope of that exception uncertain.

(3) Jurisdiction flags: EU and UK users face heightened exposure because GDPR and UK GDPR impose specific requirements on the legal basis for processing personal data for AI training, and the adequacy of an opt-out (versus affirmative consent or a legitimate-interests assessment) is actively scrutinized by European DPAs. California residents may have rights under the CCPA to know and limit the use of personal information. The provision's application in jurisdictions with sector-specific AI regulations warrants monitoring.

(4) Contract and vendor implications: Organizations using Claude.ai under corporate accounts while relying on this consumer ToS should note that the training opt-out exceptions may not satisfy GDPR Article 28 data processing agreement requirements if employee data is processed. Procurement teams should evaluate whether a Data Processing Addendum is available and whether the safety-flagging exception is consistent with data minimization obligations.

(5) Compliance considerations: Compliance teams should document the scope of the opt-out limitation in their records of processing activities. If downstream products are built on Claude.ai, privacy notices should accurately reflect that feedback-related and safety-flagged interactions may be used for training. Legal teams in EU/UK jurisdictions should assess whether the legal basis for training on flagged content is clearly established in Anthropic's privacy documentation.
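
As a concrete illustration of point (5), a hypothetical sketch of how the opt-out limitation might be captured in a records-of-processing entry (Python; every field name is illustrative, not a schema mandated by GDPR Article 30):

    # Hypothetical records-of-processing (RoPA) entry capturing the
    # opt-out limitation; field names are illustrative only.
    ropa_entry = {
        "processing_activity": "AI model training on Claude.ai conversations",
        "controller_terms": "Anthropic Consumer Terms",
        "legal_basis": "to be assessed per jurisdiction (consent vs. legitimate interests)",
        "data_subject_control": {
            "mechanism": "account-level training opt-out toggle",
            "exceptions": [
                "conversations rated via the feedback interface",
                "conversations flagged for safety review",
            ],
        },
        "notice_requirement": "disclose that feedback-rated and safety-flagged "
                              "interactions may be used for training despite opt-out",
    }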


Applicable agencies

  • FTC
    The FTC has jurisdiction over unfair or deceptive data practices affecting consumers, including representations about user controls over personal data used for AI training.
    File a complaint →

Applicable regulations

  • EU AI Act (European Union)
  • California AB 2013 AI Training Data Transparency (California, US)
  • Colorado AI Act (Colorado, US)
  • EU AI Act - High Risk Provisions (European Union)
  • GDPR (European Union)
  • Texas AI Act (Texas, US)
  • Trump Executive Order on AI Policy Framework (US)
  • UK GDPR (United Kingdom)

Provision details

Document information
  • Document: Anthropic Consumer Terms
  • Entity: Anthropic
  • Document last updated: May 12, 2026

Tracking information
  • First tracked: May 12, 2026
  • Last verified: May 12, 2026
  • Record ID: CA-P-011793
  • Document ID: CA-D-00785

Evidence Provenance
  • Source URL: Wayback Machine
  • Content hash (SHA-256): 66d87fe1684016e22c68038645304344ee2e8d3094611804048e223495320d61
  • Analysis generated: May 12, 2026 15:09 UTC
  • Evidence: ✓ Snapshot stored · ✓ Hash verified
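
Because the full content hash is published above, anyone holding the archived snapshot can recheck it locally; a minimal sketch (Python; the snapshot filename is hypothetical):

    # Minimal sketch: recompute the SHA-256 of a stored snapshot and
    # compare it to the published content hash. Filename is hypothetical.
    import hashlib

    EXPECTED = "66d87fe1684016e22c68038645304344ee2e8d3094611804048e223495320d61"

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    digest = sha256_of("anthropic-consumer-terms-snapshot.html")
    print("verified" if digest == EXPECTED else f"mismatch: {digest}")
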
Citation Record
Entity: Anthropic
Document: Anthropic Consumer Terms
Record ID: CA-P-011793
Captured: 2026-05-12 15:09:41 UTC
SHA-256: 66d87fe1684016e2…
URL: https://conductatlas.com/platform/anthropic/anthropic-consumer-terms/model-training-opt-out-with-feedback-and-safety-exceptions/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
  • Severity: Medium


Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions


Is ConductAtlas affiliated with Anthropic?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Anthropic.