
Safety Review Exception to Training Opt-Out

High severity

Why it matters

This provision creates a significant exception to the opt-out right: any conversation Anthropic deems relevant to safety can still be retained and used for training, which reduces the practical value of the opt-out.

Consumer impact

Anthropic collects your conversation inputs and outputs, device data, and usage information, and may use that data to train its AI models unless you opt out. You can opt out of having your conversations used for model training by adjusting your account settings at claude.ai. Even after you opt out, however, Anthropic retains the right to use your conversations for training if they are flagged for safety review.

Applicable agencies

  • FTC
    Overriding consumer opt-out choices through broadly defined exceptions may constitute an unfair or deceptive trade practice under Section 5 of the FTC Act.
    File a complaint →

Provision details

Document information
Document
Anthropic Privacy Policy
Entity
Anthropic
Document last updated
March 24, 2026
Tracking information
First tracked
March 15, 2026
Last verified
March 15, 2026
Record ID
CA-P-000103
Document ID
CA-D-00012
Evidence Provenance
Source URL
Wayback Machine
SHA-256
20bca03faeb6eca729c8a9ece674a093b027618cf9e96f1e0a652dcaef888ca9
Verified
✓ Snapshot stored   ✓ Change verified
How to Cite
ConductAtlas Policy Archive
Entity: Anthropic | Document: Anthropic Privacy Policy | Record: CA-P-000103
Captured: 2026-03-15 12:21:12 UTC | SHA-256: 20bca03faeb6eca7…
URL: https://conductatlas.com/platform/anthropic/anthropic-privacy-policy/safety-review-exception-to-training-opt-out/
Accessed: April 4, 2026
Classification
Severity
High
Categories
