
AI Model Training Using Conversation Data

High severity

What it is

We may use your Inputs and Outputs to train our models and improve our Services, unless you opt out through your account settings. Even if you opt out, we will use Inputs and Outputs for model improvement when: (1) your conversations are flagged for safety review to improve our ability to detect harmful content, enforce our policies, or advance AI safety research, or (2) you've explicitly reported the materials to us (for example, via our feedback mechanisms).

Why it matters

The safety-review exception means your opt-out does not fully protect your conversations from being used in AI training, which is a meaningful limitation that may not be obvious to most users.

Consumer impact

Anthropic collects your conversation content (prompts and AI responses), device identifiers, browsing behavior, and any personal data you include in messages to Claude, and may use this data to train its AI models. Users who opt out of model training should be aware that conversations flagged for safety or policy review can still be used for training despite the opt-out, which is a meaningful limitation on that right. You can opt out of having your conversations used for model training by navigating to your Claude.ai account settings, or by submitting a data deletion request to privacy@anthropic.com.

What you can do

⚠️ These actions may provide transparency or partial mitigation but may not fully address the underlying issue. Effectiveness varies by jurisdiction and individual circumstances.
  • Opt Out of Model Training
    Log in to your Claude.ai account, navigate to Account Settings, and locate the privacy or data controls section to toggle off use of your conversations for model training.

Applicable agencies

  • FTC
    The safety-review carve-out to the opt-out may constitute an unfair or deceptive practice under FTC Act Section 5 if not adequately disclosed to consumers.
    File a complaint →

Provision details

Document information
Document
Anthropic Privacy Policy
Entity
Anthropic
Document last updated
March 24, 2026
Tracking information
First tracked
March 6, 2026
Last verified
April 4, 2026
Record ID
CA-P-002125
Document ID
CA-D-00012
Evidence Provenance
Source URL
Wayback Machine
SHA-256
55f589f5c2a5a187a9d045dc6c7e4954a2dbf9ac00fb6e3ea782dbcf9ad69387
Verified
✓ Snapshot stored   ✓ Change verified
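
To independently check the evidence provenance, you can recompute the SHA-256 digest of the archived snapshot and compare it with the value listed above. Below is a minimal Python sketch under that assumption; the filename shown is hypothetical and should be replaced with the path of the snapshot file you actually retrieved.

    import hashlib

    # Digest listed in the Evidence Provenance section above
    EXPECTED_DIGEST = "55f589f5c2a5a187a9d045dc6c7e4954a2dbf9ac00fb6e3ea782dbcf9ad69387"

    def sha256_of_file(path: str) -> str:
        """Compute the SHA-256 hex digest of a file, reading it in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        # Hypothetical filename; use the path of your downloaded snapshot.
        digest = sha256_of_file("anthropic-privacy-policy-snapshot.html")
        print("match" if digest == EXPECTED_DIGEST else "mismatch", digest)

A matching digest indicates the file you downloaded is byte-for-byte identical to the snapshot recorded here; a mismatch means the content has changed or you retrieved a different capture.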
How to Cite
ConductAtlas Policy Archive
Entity: Anthropic | Document: Anthropic Privacy Policy | Record: CA-P-002125
Captured: 2026-03-06 20:00:36 UTC | SHA-256: 55f589f5c2a5a187…
URL: https://conductatlas.com/platform/anthropic/anthropic-privacy-policy/ai-model-training-using-conversation-data/
Accessed: April 4, 2026
Classification
Severity
High
Categories
