Anthropic · Anthropic API Terms

AI Model Training Data Use and Opt-Out

Medium severity · High confidence · Explicit document language · Rare (1 of 325 platforms)
Document Record

What it is

Anthropic may use your conversations with Claude to train its AI models by default. You can turn this off in your account settings, but even if you do, your conversations can still be used for training if you rate a response (thumbs up or down) or if Anthropic's systems flag your content for safety review.

This analysis describes what Anthropic's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

The opt-out mechanism does not fully prevent your conversations from being used to train AI models, because two significant carve-outs apply regardless of your settings choice.

Consumer impact (what this means for users)

If you give feedback on any Claude response or if your content triggers a safety review, that conversation may be used to train Anthropic's AI models even if you have opted out of training in your account settings. Users who regularly rate outputs should be aware that each rating makes that conversation eligible for training use regardless of the opt-out setting.

What you can do

⚠️ These actions may provide transparency or partial mitigation but may not fully address the underlying issue. Effectiveness varies by jurisdiction and individual circumstances.
  • Opt Out of Model Training
    Log into your Claude.ai account, navigate to account settings, and locate the model training opt-out toggle. Enable the opt-out to prevent your conversations from being used for AI training, noting that feedback interactions and safety-flagged content remain subject to training use.

How other platforms handle this

Windsurf Medium

We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization). We may leverage these models independent of user selection for processi...

Writer Medium

Writer does not use Customer Data to train its AI models without explicit customer permission. Customer Data means the data, content, and information that customers and their end users submit to or through the Services.

Ideogram Medium

We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.

See all platforms with this clause type →

Monitoring

Anthropic has changed this document before.

Original Clause Language

"We may use Materials to provide, maintain, and improve the Services and to develop other products and services, including training our models, unless you opt out of training through your account settings. Even if you opt out, we will use Materials for model training when: (1) you provide Feedback to us regarding any Materials, or (2) your Materials are flagged for safety review to improve our ability to detect harmful content, enforce our policies, or advance our safety research."

— Excerpt from Anthropic's Anthropic API Terms
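
The clause reduces to a simple decision rule: a conversation is training-eligible by default, an account-level opt-out excludes it, and either carve-out (user feedback or a safety flag) overrides the opt-out. The sketch below models that rule in Python purely for illustration; the record type and field names (opted_out, has_feedback, safety_flagged) are hypothetical and do not reflect Anthropic's actual systems or schema.

    from dataclasses import dataclass

    @dataclass
    class Conversation:
        """Illustrative record; hypothetical fields, not Anthropic's schema."""
        opted_out: bool       # user disabled training in account settings
        has_feedback: bool    # user rated a response (thumbs up/down)
        safety_flagged: bool  # content flagged for safety review

    def training_eligible(c: Conversation) -> bool:
        # Carve-outs first: feedback or a safety flag override the opt-out.
        if c.has_feedback or c.safety_flagged:
            return True
        # Otherwise the default is eligible, unless the user opted out.
        return not c.opted_out

    # An opted-out conversation becomes eligible the moment it is rated.
    assert training_eligible(Conversation(opted_out=True, has_feedback=True, safety_flagged=False))
    assert not training_eligible(Conversation(opted_out=True, has_feedback=False, safety_flagged=False))

The asserts make the consumer-impact point above concrete: under this rule, the opt-out controls only the default branch, never the carve-outs.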

ConductAtlas Analysis

Institutional analysis (Compliance & governance intelligence)

REGULATORY LANDSCAPE: This provision engages GDPR Articles 6 and 9 regarding lawful basis for processing personal data for AI training purposes, and Article 22 regarding automated decision-making. The UK GDPR imposes equivalent obligations. The CCPA grants California residents rights to know about and limit use of personal information, which would cover conversation data used for training. The FTC Act's prohibition on unfair or deceptive practices is relevant to whether the carve-outs are sufficiently disclosed to constitute informed consent. Applicable law in the EU may require affirmative opt-in consent for training use of personal data, creating a potential tension with this provision's training-on-by-default, opt-out structure.

GOVERNANCE EXPOSURE: High. The feedback-linked carve-out is operationally significant because users who interact with rating features may not understand they are triggering a training data permission. The safety-review carve-out is broad and self-defined, with no user notification mechanism described. Both carve-outs require evaluation under GDPR lawful-basis frameworks if deployed to EU users.

JURISDICTION FLAGS: EU and UK users face the highest exposure given GDPR and UK GDPR consent requirements for AI training use of personal data. California residents retain CCPA rights to know and limit. Illinois and other US states with biometric or sensitive data laws may create additional exposure depending on content types submitted. Jurisdictions requiring opt-in consent for AI training data use may find the opt-out-by-default structure requires modification.

CONTRACT AND VENDOR IMPLICATIONS: Enterprises deploying Claude.ai for employee use should assess whether employee conversation data used for training creates employment law obligations or conflicts with internal data governance policies. Vendor contracts that incorporate Claude.ai outputs should account for the possibility that employee inputs contributed to model training regardless of organizational opt-out preferences, as the safety-review carve-out operates at Anthropic's discretion.

COMPLIANCE CONSIDERATIONS: Legal and privacy teams should map the feedback interaction mechanism against consent records to determine whether rating interactions constitute disclosed and lawful training consent. Data-mapping exercises should distinguish between opted-out conversation data and feedback or safety-flagged data, as sketched below. For EU deployments, a Data Protection Impact Assessment may be warranted given the automated processing of personal data for model training. Privacy notices should be reviewed to confirm the carve-outs are adequately disclosed to users.
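
One way to operationalize that data-mapping step is to partition exported conversation records into buckets by the basis on which they became training-eligible, so each bucket can be checked against consent records. A minimal sketch under assumed inputs; the tuple fields and bucket names are hypothetical and carry no relation to Anthropic's data model.

    from collections import defaultdict

    # Hypothetical audit export: (record_id, opted_out, has_feedback, safety_flagged)
    records = [
        ("r1", True,  False, False),  # opted out, no carve-out
        ("r2", True,  True,  False),  # opted out, but user rated a response
        ("r3", False, False, False),  # training on by default
        ("r4", True,  False, True),   # opted out, but safety-flagged
    ]

    buckets = defaultdict(list)
    for rid, opted_out, has_feedback, safety_flagged in records:
        if has_feedback:
            buckets["eligible_via_feedback"].append(rid)
        elif safety_flagged:
            buckets["eligible_via_safety_flag"].append(rid)
        elif not opted_out:
            buckets["eligible_by_default"].append(rid)
        else:
            buckets["excluded_by_opt_out"].append(rid)

    for bucket, ids in sorted(buckets.items()):
        print(f"{bucket}: {ids}")

Keeping the carve-out buckets separate from the default bucket matters for the consent-mapping question above: each bucket rests on a different disclosure, and each needs its own lawful-basis answer.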


Applicable agencies

  • FTC
    The FTC has authority over unfair or deceptive practices in data collection and use disclosures, including whether the training opt-out carve-outs are adequately disclosed to consumers.
    File a complaint →

Applicable regulations

  • EU AI Act (European Union)
  • California AB 2013 AI Training Data Transparency (US-CA)
  • Colorado AI Act (US-CO)
  • EU AI Act - High Risk Provisions (European Union)
  • GDPR (European Union)
  • Texas AI Act (Texas, USA)
  • Trump Executive Order on AI Policy Framework (US)
  • UK GDPR (United Kingdom)

Provision details

Document information
  • Document: Anthropic API Terms
  • Entity: Anthropic
  • Document last updated: May 5, 2026

Tracking information
  • First tracked: May 8, 2026
  • Last verified: May 10, 2026
  • Record ID: CA-P-009793
  • Document ID: CA-D-00644

Evidence Provenance
  • Source URL: Wayback Machine
  • Content hash (SHA-256): 76b3ec7295fe5abd7a14cd2bc45c46e3b7dd9a66ea991a2455e2ef95f735e820
  • Analysis generated: May 8, 2026 10:56 UTC
  • Evidence: ✓ Snapshot stored · ✓ Hash verified
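
The record above publishes the snapshot's SHA-256 digest, so anyone holding the archived document can verify it independently. A minimal sketch in Python; the filename is a placeholder for whichever snapshot file you actually retrieve from the archive.

    import hashlib

    EXPECTED = "76b3ec7295fe5abd7a14cd2bc45c46e3b7dd9a66ea991a2455e2ef95f735e820"

    # Placeholder filename; substitute the snapshot you downloaded.
    with open("anthropic-api-terms-snapshot.html", "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    print("hash matches" if digest == EXPECTED else "hash MISMATCH")

A matching digest confirms the bytes you hold are the bytes that were analyzed; any edit to the snapshot, however small, produces a different digest.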
Citation Record
Entity: Anthropic
Document: Anthropic API Terms
Record ID: CA-P-009793
Captured: 2026-05-08 10:56:14 UTC
SHA-256: 76b3ec7295fe5abd…
URL: https://conductatlas.com/platform/anthropic/anthropic-api-terms/ai-model-training-data-use-and-opt-out/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
  • Severity: Medium



Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Anthropic's AI Model Training Data Use and Opt-Out clause do?

The opt-out mechanism does not fully prevent your conversations from being used to train AI models, because two significant carve-outs apply regardless of your settings choice.

How does this clause affect you?

If you give feedback on any Claude response or if your content triggers a safety review, that conversation may be used to train Anthropic's AI models even if you have opted out of training in your account settings. Users who regularly rate outputs should be aware that each rating makes that conversation eligible for training use regardless of the opt-out setting.

How many platforms have this type of clause?

ConductAtlas has identified this type of provision on 1 of the 325 platforms it tracks. See the full comparison.

Is ConductAtlas affiliated with Anthropic?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Anthropic.