Anthropic · Anthropic Privacy Policy

Model Training Opt-Out with Safety-Flagging Carve-Out

Medium severity · Medium confidence · Explicit document language · Unique · 0 of 325 platforms
Document Record

What it is

Anthropic may use your chat messages to train its AI, but you can turn this off in settings. However, even after opting out, if a conversation is flagged for safety reasons or you report it yourself, it can still be used for training.

This analysis describes what Anthropic's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

The opt-out does not provide complete exclusion from model training use: the policy reserves the right to use flagged conversations for training regardless of a user's opt-out preference.

Interpretive note: The criteria and operational process for safety flagging are not defined in the document, creating ambiguity about the practical scope of the carve-out and the extent to which the opt-out is effective.

Consumer impact (what this means for users)

Users who opt out of model training use should be aware that the policy permits continued use of their Inputs and Outputs when those conversations are flagged for safety review, which means the opt-out does not guarantee full exclusion of personal conversation content from model training.

What you can do

⚠️ These actions may provide transparency or partial mitigation but may not fully address the underlying issue. Effectiveness varies by jurisdiction and individual circumstances.
  • Opt Out of Model Training
    Log into Claude.ai, navigate to account settings, and locate the model training or privacy controls section to opt out of conversation use for model training.

How other platforms handle this

Ideogram (Medium severity)

We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.

Windsurf (Medium severity)

We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization). We may leverage these models independent of user selection for processi...

Character.AI (Medium severity)

Users under 18 years old interact with an age-appropriate model specifically designed to reduce the likelihood of exposure to sensitive or suggestive content. Our under-18 model has additional and more conservative classifiers than the model for our adult users so we can enforce our content policies...


Monitoring

Anthropic has changed this document before.
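
Change monitoring of this kind can be approximated with a content-hash comparison: fetch the document, hash the bytes, and compare against the last stored digest. The sketch below is illustrative only, not ConductAtlas's actual pipeline; the policy URL and state-file name are assumptions, and a production monitor would hash normalized extracted text rather than raw HTML, since dynamic pages embed timestamps and nonces that change on every render.

```python
# Minimal sketch of hash-based change detection for a tracked policy document.
# The URL and state-file name are illustrative assumptions, not real internals.
import hashlib
import urllib.request
from pathlib import Path

POLICY_URL = "https://www.anthropic.com/legal/privacy"   # assumed location
STATE_FILE = Path("anthropic_privacy.sha256")            # last-seen digest

def fetch_digest(url: str) -> str:
    """Download the document and return the SHA-256 hex digest of its bytes."""
    with urllib.request.urlopen(url) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def document_changed() -> bool:
    """Compare the current digest with the stored one, then persist it."""
    current = fetch_digest(POLICY_URL)
    previous = STATE_FILE.read_text().strip() if STATE_FILE.exists() else None
    STATE_FILE.write_text(current + "\n")
    return previous is not None and previous != current

if __name__ == "__main__":
    print("document changed" if document_changed() else "no change detected")
```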

Original Clause Language

We may use your Inputs and Outputs to train our models and improve our Services, unless you opt out through your account settings. Even if you opt-out, we will use Inputs and Outputs for model improvement when: (1) your conversations are flagged for safety review to improve our ability to detect harmful content, enforce our policies, or advance AI safety research, or (2) you've explicitly reported the materials to us (for example via our feedback mechanisms).

— Excerpt from the Anthropic Privacy Policy


Institutional analysis (Compliance & governance intelligence)

(1) REGULATORY LANDSCAPE: This provision engages GDPR Article 6 lawful basis requirements and Article 21 objection rights, enforced by EU Member State supervisory authorities; CCPA opt-out requirements enforced by the California Privacy Protection Agency; and Brazilian LGPD consent and legitimate interest provisions enforced by ANPD. The carve-out permitting training use of safety-flagged conversations after opt-out may require evaluation under GDPR as to whether legitimate interest or another lawful basis adequately supports processing over a user's objection.

(2) GOVERNANCE EXPOSURE: Medium. The provision creates a documented exception to the opt-out that is defined by an internal operational criterion (safety flagging) whose scope is not specified in the policy. This creates potential ambiguity about the practical effect of the opt-out mechanism, which may be scrutinized by regulators evaluating whether the opt-out is meaningful under applicable law.

(3) JURISDICTION FLAGS: EU/EEA users have heightened exposure given GDPR Article 21 objection rights and Article 6 lawful basis requirements. California residents may evaluate whether the opt-out mechanism satisfies CPRA requirements. Brazilian and South Korean users are subject to their respective national frameworks, which may impose additional constraints on legitimate interest processing.

(4) CONTRACT AND VENDOR IMPLICATIONS: Organizations deploying Claude on behalf of employees or end users under Commercial Services agreements should assess whether this carve-out affects their own data processing obligations and whether their data processing agreements with Anthropic address the treatment of safety-flagged content.

(5) COMPLIANCE CONSIDERATIONS: Compliance teams should evaluate whether the safety-flagging carve-out is disclosed with sufficient specificity to satisfy notice requirements under applicable law, and whether the opt-out mechanism is implemented in a manner consistent with GDPR, CCPA, and LGPD opt-out obligations. Documentation of the lawful basis relied upon for processing flagged conversations after opt-out should be reviewed.


Applicable agencies

  • FTC
    The FTC has jurisdiction over unfair or deceptive trade practices related to privacy representations, including whether opt-out mechanisms function as disclosed.

Applicable regulations

  • EU AI Act (European Union)
  • California AB 2013 AI Training Data Transparency (US-CA)
  • Colorado AI Act (US-CO)
  • EU AI Act - High Risk Provisions (EU)
  • GDPR (European Union)
  • Texas AI Act (Texas, USA)
  • Trump Executive Order on AI Policy Framework (US)
  • UK GDPR (United Kingdom)

Provision details

Document information
  • Document: Anthropic Privacy Policy
  • Entity: Anthropic
  • Document last updated: May 5, 2026

Tracking information
  • First tracked: May 9, 2026
  • Last verified: May 12, 2026
  • Record ID: CA-P-011307
  • Document ID: CA-D-00012

Evidence Provenance
  • Source URL: Wayback Machine
  • Content hash (SHA-256): 20bca03faeb6eca729c8a9ece674a093b027618cf9e96f1e0a652dcaef888ca9
  • Analysis generated: May 9, 2026 14:50 UTC
  • Evidence: ✓ Snapshot stored · ✓ Hash verified
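
Anyone holding a copy of the archived snapshot can check it against the digest published above. A minimal sketch follows, assuming the snapshot bytes have been saved locally under a hypothetical filename; note that a match requires the exact captured bytes, so re-downloading the live page will generally produce a different digest.

```python
# Minimal sketch: check a locally saved snapshot against the published digest.
# The snapshot filename is hypothetical; the digest is the one recorded above.
import hashlib
from pathlib import Path

PUBLISHED_SHA256 = "20bca03faeb6eca729c8a9ece674a093b027618cf9e96f1e0a652dcaef888ca9"

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

snapshot = Path("anthropic-privacy-policy-snapshot.html")  # hypothetical name
if sha256_of(snapshot) == PUBLISHED_SHA256:
    print("snapshot matches the published content hash")
else:
    print("mismatch: snapshot differs from the archived capture")
```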
Citation Record
Entity: Anthropic
Document: Anthropic Privacy Policy
Record ID: CA-P-011307
Captured: 2026-05-09 14:50:44 UTC
SHA-256: 20bca03faeb6eca7…
URL: https://conductatlas.com/platform/anthropic/anthropic-privacy-policy/model-training-opt-out-with-safety-flagging-carve-out/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
  • Severity: Medium


Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Anthropic's Model Training Opt-Out with Safety-Flagging Carve-Out clause do?

It permits Anthropic to use conversation Inputs and Outputs for model training by default, subject to an account-settings opt-out, while reserving the right to train on conversations that are flagged for safety review or that the user explicitly reports, even after an opt-out. The criteria for safety flagging are not defined with operational specificity in the document.

How does this clause affect you?

If you opt out, most of your conversations are excluded from training, but any conversation flagged for safety review, or one you report through feedback mechanisms, can still be used. The opt-out therefore does not guarantee full exclusion of your conversation content from model training.

Is ConductAtlas affiliated with Anthropic?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Anthropic.