Anthropic · Claude.ai Terms of Service

AI Model Training Use of Materials

High severity · High confidence · Explicit document language · Unique · 0 of 325 platforms
Document Record

What it is

Anthropic can use your conversations with Claude, including what you type and what Claude responds, to train its AI models by default. You can turn this off in settings, but your conversations can still be used for training if you give feedback (like thumbs up/down) or if your content is flagged for a safety review.

This analysis describes what Anthropic's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This provision means that even users who opt out of training cannot fully prevent their conversation data from being used in AI model development under certain circumstances, which has implications for personal data shared in conversations.

Consumer impact (what this means for users)

Your conversation content, including sensitive personal information you may share, can be used to train Anthropic's AI models by default, and opting out does not cover situations where you submit feedback or your content triggers a safety review.

What you can do

⚠️ These actions may provide transparency or partial mitigation but may not fully address the underlying issue. Effectiveness varies by jurisdiction and individual circumstances.
  • Opt Out of Model Training
    Log into your Claude account, navigate to account settings, and locate the Privacy section to find the model training opt-out toggle. Enable the opt-out to prevent default use of your conversations for training. Note that the opt-out does not cover feedback submissions or content flagged for safety review.

How other platforms handle this

Windsurf Medium

We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization). We may leverage these models independent of user selection for processi...

Ideogram Medium

We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.

Netflix Medium

(ix) engage in any of the foregoing in connection with the use, creation, development, modification, prompting, fine-tuning, training, testing, benchmarking or validation of any machine learning tool, model, system, algorithm, product or other technology.


Monitoring

Anthropic has changed this document before.
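Document monitoring of this kind reduces to comparing a content hash of each fetch against the last recorded value. A minimal sketch of that comparison, using Python's standard library (illustrative only; this is not ConductAtlas's actual pipeline):

```python
import hashlib

def content_hash(body: bytes) -> str:
    """SHA-256 hex digest of a fetched document body."""
    return hashlib.sha256(body).hexdigest()

def has_changed(body: bytes, last_seen_hash: str) -> bool:
    """True if the current fetch no longer matches the recorded hash."""
    return content_hash(body) != last_seen_hash
```

In practice a monitor would normalize the fetched HTML (stripping timestamps, ads, and session tokens) before hashing, since raw-byte comparison flags purely cosmetic changes as revisions.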

Original Clause Language

We may use Materials to provide, maintain, and improve the Services and to develop other products and services, including training our models, unless you opt out of training through your account settings. Even if you opt out, we will use Materials for model training when: (1) you provide Feedback to us regarding any Materials, or (2) your Materials are flagged for safety review to improve our ability to detect harmful content, enforce our policies, or advance our safety research.

— Excerpt from Anthropic's Claude.ai Terms of Service

ConductAtlas Analysis

Institutional analysis (Compliance & governance intelligence)

REGULATORY LANDSCAPE: This provision implicates GDPR Article 6 (lawful basis for processing) and Article 5 (purpose limitation and data minimisation) for EU and UK users, as use of conversation data for model training may require a distinct lawful basis from service delivery. The carve-outs for safety review and feedback may require evaluation as separate processing activities. Enforcement authority includes data protection authorities in EU member states and the UK ICO. CCPA may require disclosure and opt-out rights for California residents regarding use of personal information for training. FTC Act authority is relevant to whether the opt-out mechanism is clearly disclosed and functional.

GOVERNANCE EXPOSURE: High. The training use of inputs and outputs constitutes processing of potentially sensitive personal data at scale. The carve-outs embedded in the opt-out mechanism mean the opt-out is not a complete data use restriction, which may not be apparent to average consumers. The adequacy of notice and the accessibility of the opt-out mechanism are compliance-critical points.

JURISDICTION FLAGS: EU and UK users face heightened exposure given GDPR and UK GDPR purpose limitation and consent requirements. California residents have CCPA rights regarding use of personal information that may interact with this provision. The breadth of the safety review carve-out may be difficult to operationalize transparently across jurisdictions.

CONTRACT AND VENDOR IMPLICATIONS: Enterprise procurement teams should assess whether employee conversation data processed under personal accounts is subject to this training use, and whether the business domain account linking provision intersects with employer data governance obligations. B2B contracts that incorporate consumer-facing services should flag this provision for data processing agreement review.

COMPLIANCE CONSIDERATIONS: Legal teams should verify that the opt-out mechanism is technically implemented and accessible, document the lawful basis for training use in GDPR-required records of processing activities, and assess whether the safety review and feedback carve-outs are adequately disclosed in privacy notices. Data mapping should account for the residual training data pathway that persists post opt-out.


Applicable agencies

  • FTC
    FTC has authority over unfair or deceptive practices related to consumer data use disclosures and the clarity of opt-out mechanisms for AI training
    File a complaint →

Applicable regulations

EU AI Act
European Union
California AB 2013 AI Training Data Transparency
US-CA
Colorado AI Act
US-CO
EU AI Act - High Risk Provisions
EU
GDPR
European Union
Texas AI Act
Texas, USA
Trump Executive Order on AI Policy Framework
US
UK GDPR
United Kingdom

Provision details

Document information
Document
Claude.ai Terms of Service
Entity
Anthropic
Document last updated
May 5, 2026
Tracking information
First tracked
May 9, 2026
Last verified
May 10, 2026
Record ID
CA-P-009315
Document ID
CA-D-00011
Evidence Provenance
Source URL
Wayback Machine
Content hash (SHA-256)
b10681ed0556f33fd77bdd0ca8d5a1d1e02616dab9696dadd177f042a3770d68
Analysis generated
May 9, 2026 14:35 UTC
Methodology
Evidence
✓ Snapshot stored   ✓ Hash verified
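The published SHA-256 makes the snapshot independently verifiable: anyone holding a copy of the archived document can recompute its digest and compare it to the recorded value. A minimal sketch, assuming the snapshot is saved locally (the file path is a placeholder):

```python
import hashlib

# Published digest from the evidence record above.
RECORDED_SHA256 = "b10681ed0556f33fd77bdd0ca8d5a1d1e02616dab9696dadd177f042a3770d68"

def verify_snapshot(path: str, expected: str = RECORDED_SHA256) -> bool:
    """Recompute the file's SHA-256 in chunks and compare to the record."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected
```

Chunked reading keeps memory use constant regardless of document size; a mismatch means the local copy differs from what was hashed at capture time.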
Citation Record
Entity: Anthropic
Document: Claude.ai Terms of Service
Record ID: CA-P-009315
Captured: 2026-05-09 14:35:38 UTC
SHA-256: b10681ed0556f33f…
URL: https://conductatlas.com/platform/anthropic/claudeai-terms-of-service/ai-model-training-use-of-materials/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity
High
Categories


Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Anthropic's AI Model Training Use of Materials clause do?

This provision means that even users who opt out of training cannot fully prevent their conversation data from being used in AI model development under certain circumstances, which has implications for personal data shared in conversations.

How does this clause affect you?

Your conversation content, including sensitive personal information you may share, can be used to train Anthropic's AI models by default, and opting out does not cover situations where you submit feedback or your content triggers a safety review.

Is ConductAtlas affiliated with Anthropic?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Anthropic.