OpenAI · Privacy Policy (ROW)

AI Model Training on User Conversations

High severity

What it is

OpenAI can use what you type into ChatGPT to train its AI models. Training on your conversations is enabled by default; you must go into your account settings and turn it off yourself.

Consumer impact (what this means for users)

Your private ChatGPT conversations, which may contain sensitive personal information, are used to train OpenAI's AI models unless you manually disable this in account settings. This creates a risk that sensitive disclosures you made in chat could influence future AI outputs.

What you can do

⚠️ These actions may provide transparency or partial mitigation but may not fully address the underlying issue. Effectiveness varies by jurisdiction and individual circumstances.
  • Opt Out of Model Training
    Log into ChatGPT, click your profile icon, go to Settings, select Data Controls, and toggle off 'Improve the model for everyone.' This disables use of your conversations for model training.


Why it matters (compliance & risk perspective)

Conversations with ChatGPT can include sensitive personal information — health questions, financial details, relationship issues — and using this content for model training without opt-in consent raises significant privacy risks.

Original clause language
We may use your personal data to train and improve our models and services. When you use our Services, your conversations with our models may be used to train and improve our Services, unless you opt out. You can turn this off by going to your account Settings and turning off 'Improve the model for everyone.'

Institutional analysis (Compliance & legal intelligence)

REGULATORY FRAMEWORK: This provision implicates several regimes:
  • GDPR Art. 6(1)(f) (legitimate interests) and Art. 9 (special category data): where conversation content includes health, political, religious, or biometric data, legitimate interests alone is an insufficient legal basis, and explicit consent under Art. 9(2)(a) is required.
  • FTC Act Section 5: applies to representations about data use that may be unfair or deceptive.
  • EU AI Act (Regulation (EU) 2024/1689): imposes transparency obligations on training data for general-purpose AI models.
  • California Privacy Rights Act (CPRA §1798.121): requires an opt-out for use of sensitive personal information beyond specified purposes.


Applicable agencies

  • FTC
    The FTC has enforcement authority over deceptive or unfair data practices under FTC Act Section 5, including use of personal data for AI training without adequate disclosure.

Provision details

Document information
Document
Privacy Policy (ROW)
Entity
OpenAI
Document last updated
March 5, 2026
Tracking information
First tracked
March 10, 2026
Last verified
April 27, 2026
Record ID
CA-P-003131
Document ID
CA-D-00006
Evidence Provenance
Source URL
Wayback Machine
SHA-256
f3c083059dff1a3f26f2ce10f0072ca60f38c6921517ae6dd07e528e4bfc7ce2
Verified
✓ Snapshot stored   ✓ Change verified
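
To check integrity yourself, hash the archived snapshot and compare it to the digest in this record. A minimal sketch in Python, assuming the stored copy has been downloaded locally (the filename snapshot.html is hypothetical):

  # Compute the SHA-256 of the local snapshot file and compare it to the
  # digest published in the provenance record above.
  import hashlib

  EXPECTED = "f3c083059dff1a3f26f2ce10f0072ca60f38c6921517ae6dd07e528e4bfc7ce2"

  with open("snapshot.html", "rb") as f:  # hypothetical local filename
      digest = hashlib.sha256(f.read()).hexdigest()

  print("verified" if digest == EXPECTED else "MISMATCH: file differs from record")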
How to Cite
ConductAtlas Policy Archive
Entity: OpenAI | Document: Privacy Policy (ROW) | Record: CA-P-003131
Captured: 2026-03-10 03:38:17 UTC | SHA-256: f3c083059dff1a3f…
URL: https://conductatlas.com/platform/openai/privacy-policy-row/ai-model-training-on-user-conversations/
Accessed: May 2, 2026
Classification
Severity
High