Anthropic · Anthropic Privacy Policy

Model Training Opt-Out with Safety Review Override

High severity

What it is

By default, everything you type to Claude and every response you receive can be used to train Anthropic's AI. You can turn this off in your account settings, but Anthropic can still use your conversations for training if it decides to flag them for safety review.

Consumer impact (what this means for users)

Your conversation content — including potentially sensitive personal information you share with Claude — can be used by default to train Anthropic's AI models. Even after you opt out, Anthropic may still use conversations it flags for safety review, so the opt-out does not guarantee that your data will never be used for training.

What you can do

⚠️ These actions may provide transparency or partial mitigation but may not fully address the underlying issue. Effectiveness varies by jurisdiction and individual circumstances.
  • Opt Out of Model Training
    Log into your Claude.ai account, navigate to Account Settings, and locate the data and privacy section to disable the option allowing Anthropic to use your conversations for model training.

Cross-platform context

See how other platforms handle Model Training Opt-Out with Safety Review Override and similar clauses.


Why it matters (compliance & risk perspective)

The opt-out mechanism has a significant exception that Anthropic controls unilaterally: any conversation it labels as a 'safety review' remains eligible for training regardless of your preference, which substantially limits the practical value of the opt-out.

View original clause language
We may use your Inputs and Outputs to train our models and improve our Services, unless you opt out through your account settings. Even if you opt-out, we will use Inputs and Outputs for model improvement when: (1) your conversations are flagged for safety review to improve our ability to detect harmful content, enforce our policies, or advance AI safety research, or (2) you've explicitly reported the materials to us (for example via our feedback mechanisms).

Institutional analysis (Compliance & legal intelligence)

1. REGULATORY FRAMEWORK: This provision implicates GDPR Art. 6(1)(f) (legitimate interests as a lawful basis for training), Art. 21 (right to object to legitimate-interest processing), and Art. 22 (automated decision-making). Under CCPA/CPRA Cal. Civ. Code §1798.120 (right to opt out of sale/sharing) and §1798.105 (right to deletion), the carve-out for safety-flagged data may constitute an impermissible limitation. Brazil's LGPD Art. 18 (data subject rights) and South Korea's PIPA Art. 37 (right to suspend processing) are also engaged. The EU AI Act (Regulation 2024/1689) imposes transparency obligations on GPAI model providers regarding training data, relevant to Anthropic's model development practices. Primary enforcement authorities: EDPB/national DPAs (EU), CPPA (California), ANPD (Brazil), PIPC (South Korea).

Compliance intelligence locked

Regulatory citations, enforcement risk, and due diligence action items.


Applicable agencies

  • FTC
    The FTC has authority under Section 5 of the FTC Act over unfair or deceptive data practices, including AI training data use representations and the scope of opt-out mechanisms.

Provision details

Document information
Document
Anthropic Privacy Policy
Entity
Anthropic
Document last updated
April 29, 2026
Tracking information
First tracked
March 6, 2026
Last verified
April 28, 2026
Record ID
CA-P-003862
Document ID
CA-D-00012
Evidence Provenance
Source URL
Wayback Machine
SHA-256
55f589f5c2a5a187a9d045dc6c7e4954a2dbf9ac00fb6e3ea782dbcf9ad69387
Verified
✓ Snapshot stored   ✓ Change verified
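The recorded SHA-256 digest lets anyone independently confirm that a stored snapshot of the policy text is byte-for-byte identical to what was captured. A minimal sketch of that check, in Python — the file path, digest value, and function names here are illustrative placeholders, not part of any ConductAtlas tooling:

```python
import hashlib

def sha256_hex(path: str) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks
    so large snapshots are not loaded into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_snapshot(path: str, expected_hex: str) -> bool:
    """Return True if the file's digest matches the recorded digest."""
    return sha256_hex(path) == expected_hex.lower()
```

To verify, download the archived snapshot, run `verify_snapshot` with the digest listed above, and treat any mismatch as evidence the document has changed since capture.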
How to Cite
ConductAtlas Policy Archive
Entity: Anthropic | Document: Anthropic Privacy Policy | Record: CA-P-003862
Captured: 2026-03-06 20:00:36 UTC | SHA-256: 55f589f5c2a5a187…
URL: https://conductatlas.com/platform/anthropic/anthropic-privacy-policy/model-training-opt-out-with-safety-review-override/
Accessed: April 29, 2026
Classification
Severity
High
Categories
