OpenAI may use what you type into ChatGPT and other services to improve its AI models, but you can turn this off in your account settings or through OpenAI's Privacy Portal.
This analysis describes what OpenAI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.
Training is on by default: this is an opt-out rather than opt-in arrangement, so your conversation content is used for model training unless you take active steps to stop it, a default many users may not be aware of.
The updated policy no longer explicitly states that OpenAI receives information from advertisers and other data partners for ad measurement and improvement, nor does it mention that users can control…
The updated policy now explicitly authorizes OpenAI to promote products and services to users through direct marketing on third-party properties and to share limited information with select marketing…
The updated policy removes explicit language describing how OpenAI shares personal data with marketing partners through cookies and similar technologies. The policy previously stated that 'some of th…
If you do not opt out, the content of your conversations, including questions, uploaded files, and personal information you share in prompts, may be used to train OpenAI's AI models and influence future model outputs.
How other platforms handle this
We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.
We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization).
After registration, you may create, upload or transmit files, documents, videos, images, data or information as part of your use of the Service (collectively, "User Content"). This includes any inputs you provide to our AI-powered support tools and outputs generated in response to your inputs. User ...
Monitoring
OpenAI has changed this document before.
Receive same-day alerts, structured change summaries, and monitoring for up to 10 platforms.
"We may use your Content to train our models. You can opt out of your Content being used to train our models by following the instructions in the 'How to opt out of model training' section or by visiting our Privacy Portal. Please note that in some cases, this may limit the ability of our Services to better address your specific use case."
— Excerpt from OpenAI's Privacy Policy
(1) REGULATORY LANDSCAPE: This provision implicates GDPR Article 6 (lawful basis) and Article 9 (special category data) for EEA and UK users, as user-submitted content may include health, financial, or other sensitive information. Section 5 of the FTC Act is relevant for US users regarding whether the opt-out default constitutes an unfair or deceptive practice. The EU AI Act's transparency obligations for AI system training data may also apply.

(2) GOVERNANCE EXPOSURE: High. Training on user content by default creates meaningful exposure under GDPR's purpose limitation and data minimization principles. If the asserted legal basis is legitimate interests, a documented balancing test is required; if consent is relied upon, pre-ticked defaults may be insufficient. The open-ended definition of 'Content', which could include sensitive personal data shared incidentally, amplifies this risk.

(3) JURISDICTION FLAGS: EEA and UK users face the highest exposure given GDPR's requirement for a clear and valid legal basis. California users may have CCPA rights to know and delete data used in training. Illinois users should consider whether submitted voice or biometric data could implicate BIPA. Minors' data warrants particular scrutiny under COPPA and analogous state laws.

(4) CONTRACT AND VENDOR IMPLICATIONS: Enterprise customers integrating OpenAI via API should confirm whether their data processing agreement excludes model training use, as the policy indicates API data may be treated differently. B2B contracts that route customer or employee data through OpenAI's services should include explicit data processing addenda clarifying training data use.

(5) COMPLIANCE CONSIDERATIONS: Compliance teams should audit whether the privacy notices shown to end users accurately disclose the possibility of AI model training; update data mapping records to include OpenAI as a processor or sub-processor; and review whether employee acceptable use policies address submission of confidential data to ChatGPT.
Full compliance analysis
Regulatory citations, enforcement risk, and due diligence action items.
Free: track 1 platform + weekly digest. Watcher: 10 platforms + same-day alerts. No credit card required.
How 10 AI platforms describe the use of user data for model training, improvement, and development, based on archived governance provisions.
Professional Governance Intelligence
Need to monitor specific governance provisions?
Professional includes provision-level monitoring, governance timelines, regulatory mapping, and audit-ready analysis.
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by OpenAI.