OpenAI can use what you type into ChatGPT to train its AI models. This is enabled by default; you must actively go into settings and turn it off.
Your private ChatGPT conversations, which may contain sensitive personal information, are used to train OpenAI's AI models unless you manually disable this in account settings. This creates a risk that sensitive disclosures made in chat could influence future AI outputs.
Conversations with ChatGPT can include sensitive personal information — health questions, financial details, relationship issues — and using this content for model training without opt-in consent raises significant privacy risks.
REGULATORY FRAMEWORK: This provision implicates several regimes:
- GDPR Art. 6(1)(f) (legitimate interests) and Art. 9 (special category data) — where conversation content includes health, political, religious, or biometric data, legitimate interests alone is an insufficient legal basis and explicit consent under Art. 9(2)(a) is required.
- FTC Act Section 5 applies to representations about data use that may be unfair or deceptive.
- The EU AI Act (Regulation 2024/1689) imposes transparency obligations on training data for general-purpose AI models.
- The California Privacy Rights Act (CPRA §1798.121) requires an opt-out for use of sensitive personal information beyond specified purposes.