Anthropic · Anthropic Privacy Policy

Model Training Opt-Out with Safety Review Carve-Out

Medium severity · High confidence · Explicit document language · Unique · 0 of 325 platforms

This analysis describes what Anthropic's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

The safety review carve-out means your opt-out is not absolute, and conversations that Anthropic internally flags for safety purposes can be retained and used for model training regardless of your preference.

Consumer impact (what this means for users)

The policy states that Inputs (your messages to Claude) and Outputs (Claude's responses) may be used to train Anthropic's AI models, and that this use continues even after opting out if the conversation is flagged for safety review or if you have explicitly submitted feedback about it. Device identifiers, IP address-derived location data, browsing activity, and usage information are also collected automatically. You can opt out of conversation use for model training by accessing your account settings at Claude.ai.

How other platforms handle this

Ideogram (Medium severity)

We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.

Windsurf (Medium severity)

We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization). We may leverage these models independent of user selection for processi...

Character.AI (Medium severity)

Users under 18 years old interact with an age-appropriate model specifically designed to reduce the likelihood of exposure to sensitive or suggestive content. Our under-18 model has additional and more conservative classifiers than the model for our adult users so we can enforce our content policies...


Monitoring

Anthropic has changed this document before.
Original Clause Language

"We may use your Inputs and Outputs to train our models and improve our Services, unless you opt out through your account settings. Even if you opt-out, we will use Inputs and Outputs for model improvement when: (1) your conversations are flagged for safety review to improve our ability to detect harmful content, enforce our policies, or advance AI safety research, or (2) you've explicitly reported the materials to us (for example via our feedback mechanisms)."

— Excerpt from the Anthropic Privacy Policy

Applicable regulations

EU AI Act (European Union)
California AB 2013 AI Training Data Transparency (US-CA)
Colorado AI Act (US-CO)
EU AI Act - High Risk Provisions (EU)
GDPR (European Union)
Texas AI Act (Texas, USA)
Trump Executive Order on AI Policy Framework (US)
UK GDPR (United Kingdom)

Provision details

Document information
Document: Anthropic Privacy Policy
Entity: Anthropic
Document last updated: May 5, 2026

Tracking information
First tracked: May 9, 2026
Last verified: May 12, 2026
Record ID: CA-P-008335
Document ID: CA-D-00012
Evidence Provenance
Source URL: Wayback Machine
Content hash (SHA-256): 20bca03faeb6eca729c8a9ece674a093b027618cf9e96f1e0a652dcaef888ca9
Analysis generated: May 9, 2026 14:50 UTC
Evidence: ✓ Snapshot stored · ✓ Hash verified
Citation Record
Entity: Anthropic
Document: Anthropic Privacy Policy
Record ID: CA-P-008335
Captured: 2026-05-09 14:50:44 UTC
SHA-256: 20bca03faeb6eca7…
URL: https://conductatlas.com/platform/anthropic/anthropic-privacy-policy/model-training-opt-out-with-safety-review-carve-out/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
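
The recorded content hash lets anyone check that an archived copy of the policy matches the version this analysis was generated from. A minimal sketch of that check in Python follows; it assumes the hash was computed over the raw bytes of the saved snapshot file (the exact canonicalization is not specified in the record above), and the filename snapshot.html is a placeholder for whatever copy you have downloaded, for example from the Wayback Machine.

```python
import hashlib

# SHA-256 value copied from the Evidence Provenance record above.
EXPECTED_HASH = "20bca03faeb6eca729c8a9ece674a093b027618cf9e96f1e0a652dcaef888ca9"


def verify_snapshot(path: str, expected: str = EXPECTED_HASH) -> bool:
    """Hash the locally saved snapshot and compare it to the recorded value."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest == expected


if __name__ == "__main__":
    # "snapshot.html" is a hypothetical filename for your archived copy.
    print("hash matches" if verify_snapshot("snapshot.html") else "hash differs")
```

If the digests differ, the copy you hold was captured at a different time, was saved with different encoding or markup, or was hashed over different bytes than the record assumes.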
Classification
Severity: Medium


Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Anthropic's Model Training Opt-Out with Safety Review Carve-Out clause do?

The clause lets you opt out of having your Inputs and Outputs used for model training, but it carves out two exceptions: conversations flagged for safety review, and materials you have explicitly reported through feedback mechanisms. Content falling under either exception may still be used for model improvement, so the opt-out is not absolute.

Is ConductAtlas affiliated with Anthropic?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Anthropic.