OpenAI · OpenAI Privacy Policy

AI Model Training Use of Conversation Content

High severity · High confidence · Explicit document language · Unique · 0 of 325 platforms
Recent governance activity OpenAI recorded 5 documented changes in the last 30 days.
Document Record

What it is

OpenAI states that content you submit through consumer products like ChatGPT may be used to train its AI models by default, but you can turn this off in your account's Data Controls settings.

This analysis describes what OpenAI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This provision is operationally significant because it means that conversational inputs, which may include personal, professional, or sensitive information, may be incorporated into AI model training unless the user actively disables the setting.

Recent Activity

This document changed recently

Medium May 5, 2026

The updated policy no longer explicitly states that OpenAI receives information from advertisers and other data partners for ad measurement and improvement, nor does it mention that users can control…

Medium May 1, 2026

The updated policy now explicitly authorizes OpenAI to promote products and services to users through direct marketing on third-party properties and to share limited information with select marketing…

Medium Apr 22, 2026

The updated policy removes explicit language describing how OpenAI shares personal data with marketing partners through cookies and similar technologies. The policy previously stated that 'some of th…

Consumer impact (what this means for users)

The policy states that conversation content submitted through consumer-tier products is used for model training unless the user opts out via the Data Controls toggle in account settings; API users are stated to be excluded from this default practice.

What you can do

⚠️ These actions may provide transparency or partial mitigation but may not fully address the underlying issue. Effectiveness varies by jurisdiction and individual circumstances.
  • Opt Out of Model Training
    Log into ChatGPT, navigate to Settings, select Data Controls, and toggle off 'Improve the model for everyone' to opt out of having your conversations used for model training.

How other platforms handle this

Ideogram Medium

We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.

ClickUp Medium

When you use AI features of the Services, you acknowledge that your inputs may be processed by third-party AI providers. ClickUp may use anonymized and aggregated data derived from your use of the Services to improve and train AI models and features.

Windsurf Medium

We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization). We may leverage these models independent of user selection for processi...

See all platforms with this clause type →

Monitoring

OpenAI has changed this document before.

Receive same-day alerts, structured change summaries, and monitoring for up to 10 platforms.

Original Clause Language
We may use your content to train our models. For example, we use content to train our models. You can opt out of having your content used to train our models by following the instructions in the 'How to exercise your privacy rights' section or following the model training opt-out instructions in our Help Center.

— Excerpt from OpenAI's OpenAI Privacy Policy

ConductAtlas Analysis

Institutional analysis (Compliance & governance intelligence)

REGULATORY LANDSCAPE: This provision engages the California Consumer Privacy Act and CPRA, which require businesses to provide opt-out rights for certain uses of personal data and may require disclosure of AI training use as a secondary processing purpose. Emerging US state privacy laws in Virginia, Colorado, Connecticut, and Texas also establish purpose limitation and secondary use consent requirements. The FTC Act Section 5 may apply where the opt-out mechanism's prominence or accessibility is assessed against disclosures made at point of data collection. The EU AI Act, while not addressed in this US policy, may apply to EU-resident users accessing these services.

GOVERNANCE EXPOSURE: High. The default-on nature of model training using consumer-submitted content creates a material compliance obligation to ensure the opt-out mechanism is sufficiently accessible and functional. Failure of the opt-out mechanism to operate as described, or insufficient notice at point of collection, could create regulatory exposure under state consumer protection and privacy enforcement authorities.

JURISDICTION FLAGS: California creates the highest exposure given the CPRA's expansive definition of sensitive personal information and the California Privacy Protection Agency's active rulemaking. Illinois, New York, and Texas residents may also have heightened protections depending on the nature of submitted content. For enterprise customers whose employees submit professional data through consumer-tier products, secondary liability exposure under sector-specific regulations (HIPAA, FERPA, financial regulations) may arise.

CONTRACT AND VENDOR IMPLICATIONS: Enterprise procurement teams should verify whether their employees' use of consumer-tier ChatGPT products is governed by this consumer privacy policy or by a separate data processing agreement, as the model training default-on provision applies specifically to consumer products not covered by an API or enterprise agreement. Vendor assessment checklists should confirm the product tier and applicable data processing terms before allowing organizational data to be submitted.

COMPLIANCE CONSIDERATIONS: Compliance teams should audit the model training opt-out mechanism for accessibility and confirm that user-facing consent flows adequately disclose this use at point of account creation. Data mapping documentation should classify conversation content as a distinct category of personal data with a training-use annotation. Teams operating in multi-state environments should assess whether a single toggle opt-out satisfies the varying 'clear and conspicuous' standards imposed by different state privacy laws.


Applicable agencies

  • FTC
    The FTC has jurisdiction over unfair or deceptive data practices and may assess whether OpenAI's model training opt-out mechanism meets adequate disclosure and accessibility standards under Section 5 of the FTC Act.
    File a complaint →
  • State AG
    State attorneys general in California, Virginia, Colorado, Connecticut, and Texas have enforcement authority under their respective state privacy laws over secondary use of personal data for AI model training purposes.
    File a complaint →

Applicable regulations

EU AI Act
European Union
California AB 2013 AI Training Data Transparency
US-CA
Colorado AI Act
US-CO
EU AI Act - High Risk Provisions
EU
GDPR
European Union
Texas AI Act
Texas, USA
Trump Executive Order on AI Policy Framework
US
UK GDPR
United Kingdom

Provision details

Document information
Document
OpenAI Privacy Policy
Entity
OpenAI
Document last updated
May 5, 2026
Tracking information
First tracked
May 12, 2026
Last verified
May 12, 2026
Record ID
CA-P-011503
Document ID
CA-D-00010
Evidence Provenance
Source URL
Wayback Machine
Content hash (SHA-256)
e7d3ae1b9a38038435c94dab99b33a7d5dea6d69b6f8181c5120d571f048984f
Analysis generated
May 12, 2026 10:58 UTC
Methodology
Evidence
✓ Snapshot stored   ✓ Hash verified
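The stored SHA-256 content hash allows anyone with a copy of the archived snapshot to verify it independently. A minimal sketch of that check in Python (the function name `verify_snapshot` is illustrative, not part of any ConductAtlas tooling; the real check would be run against the archived document bytes and the hash published in the record above):

```python
import hashlib


def verify_snapshot(snapshot_bytes: bytes, expected_hash: str) -> bool:
    """Recompute the SHA-256 digest of an archived snapshot and
    compare it to the published content hash (case-insensitive)."""
    digest = hashlib.sha256(snapshot_bytes).hexdigest()
    return digest == expected_hash.lower()


# Placeholder payload: in practice, read the archived snapshot file
# and pass the hex digest shown in the evidence record.
payload = b"example snapshot bytes"
expected = hashlib.sha256(payload).hexdigest()
print(verify_snapshot(payload, expected))  # True when bytes match
```

Any single-byte difference in the snapshot produces a completely different digest, which is what makes the published hash a stable identifier for the captured document version.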
Citation Record
Entity: OpenAI
Document: OpenAI Privacy Policy
Record ID: CA-P-011503
Captured: 2026-05-12 10:58:58 UTC
SHA-256: e7d3ae1b9a380384…
URL: https://conductatlas.com/platform/openai/openai-privacy-policy/ai-model-training-use-of-conversation-content/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity
High
Categories


Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does OpenAI's AI Model Training Use of Conversation Content clause do?

The clause reserves OpenAI's right to use content submitted through consumer products for AI model training by default. This is operationally significant because conversational inputs, which may include personal, professional, or sensitive information, may be incorporated into model training unless the user actively disables the setting.

How does this clause affect you?

The policy states that conversation content submitted through consumer-tier products is used for model training unless the user opts out via the Data Controls toggle in account settings; API users are stated to be excluded from this default practice.

Is ConductAtlas affiliated with OpenAI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by OpenAI.