OpenAI states that content you submit through consumer products like ChatGPT may be used to train its AI models by default, but you can turn this off in your account's Data Controls settings.
This analysis describes what OpenAI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This provision is operationally significant because it means that conversational inputs, which may include personal, professional, or sensitive information, may be incorporated into AI model training unless the user actively disables the setting.
The updated policy no longer explicitly states that OpenAI receives information from advertisers and other data partners for ad measurement and improvement, nor does it mention that users can control…
The updated policy now explicitly authorizes OpenAI to promote products and services to users through direct marketing on third-party properties and to share limited information with select marketing…
The updated policy removes explicit language describing how OpenAI shares personal data with marketing partners through cookies and similar technologies. The policy previously stated that 'some of th…
The policy states that conversation content submitted through consumer-tier products is used for model training unless the user opts out via the Data Controls toggle in account settings; API users are stated to be excluded from this default practice.
How other platforms handle this
We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.
When you use AI features of the Services, you acknowledge that your inputs may be processed by third-party AI providers. ClickUp may use anonymized and aggregated data derived from your use of the Services to improve and train AI models and features.
We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization). We may leverage these models independent of user selection for processi...
Monitoring
OpenAI has changed this document before.
"We may use your content to train our models. For example, we use content to train our models. You can opt out of having your content used to train our models by following the instructions in the 'How to exercise your privacy rights' section or following the model training opt-out instructions in our Help Center.— Excerpt from OpenAI's OpenAI Privacy Policy
REGULATORY LANDSCAPE: This provision engages the California Consumer Privacy Act and CPRA, which require businesses to provide opt-out rights for certain uses of personal data and may require disclosure of AI training use as a secondary processing purpose. Emerging US state privacy laws in Virginia, Colorado, Connecticut, and Texas also establish purpose limitation and secondary use consent requirements. The FTC Act Section 5 may apply where the opt-out mechanism's prominence or accessibility is assessed against disclosures made at point of data collection. The EU AI Act, while not addressed in this US policy, may apply to EU-resident users accessing these services.

GOVERNANCE EXPOSURE: High. The default-on nature of model training using consumer-submitted content creates a material compliance obligation to ensure the opt-out mechanism is sufficiently accessible and functional. Failure of the opt-out mechanism to operate as described, or insufficient notice at point of collection, could create regulatory exposure under state consumer protection and privacy enforcement authorities.

JURISDICTION FLAGS: California creates the highest exposure given the CPRA's expansive definition of sensitive personal information and the California Privacy Protection Agency's active rulemaking. Illinois, New York, and Texas residents may also have heightened protections depending on the nature of submitted content. For enterprise customers whose employees submit professional data through consumer-tier products, secondary liability exposure under sector-specific regulations (HIPAA, FERPA, financial regulations) may arise.

CONTRACT AND VENDOR IMPLICATIONS: Enterprise procurement teams should verify whether their employees' use of consumer-tier ChatGPT products is governed by this consumer privacy policy or by a separate data processing agreement, as the model training default-on provision applies specifically to consumer products not covered by an API or enterprise agreement. Vendor assessment checklists should confirm the product tier and applicable data processing terms before allowing organizational data to be submitted.

COMPLIANCE CONSIDERATIONS: Compliance teams should audit the model training opt-out mechanism for accessibility and confirm that user-facing consent flows adequately disclose this use at point of account creation. Data mapping documentation should classify conversation content as a distinct category of personal data with a training-use annotation. Teams operating in multi-state environments should assess whether a single toggle opt-out satisfies the varying 'clear and conspicuous' standards imposed by different state privacy laws.
How 10 AI platforms describe the use of user data for model training, improvement, and development, based on archived governance provisions.
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by OpenAI.