OpenAI's enterprise privacy page indicates that data submitted through ChatGPT Enterprise and the API is not used by default to train OpenAI's AI models, distinguishing these tiers from the standard consumer ChatGPT product.
This analysis describes what OpenAI's agreement states, permits, or reserves; it does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This distinction is material for businesses processing employee or customer data through OpenAI products, as it affects whether submitted inputs could be incorporated into future model outputs accessible to other users.
Interpretive note: The substantive policy text was not available in the provided HTML; this provision is inferred from the document's stated subject matter and OpenAI's publicly known enterprise privacy posture rather than verbatim clause language.
Businesses using ChatGPT Enterprise or the OpenAI API operate under data handling terms that the document states exclude their inputs from AI model training by default, providing a degree of data separation not present in the standard consumer ChatGPT tier.
How other platforms handle this
Writer does not use Customer Data to train its AI models without explicit customer permission. Customer Data means the data, content, and information that customers and their end users submit to or through the Services.
We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.
Data publicly available on the Internet. Our artificial intelligence models are trained on data that is publicly available on the Internet by third parties, which may contain personal data, even if we use good practices to filter out such personal data. [...] Training Datasets. In some cases, we acc...
Monitoring
OpenAI has changed this document before.
(1) Regulatory landscape. This provision engages GDPR Article 28 obligations where OpenAI acts as a data processor and the enterprise customer as a data controller; under the GDPR, processors may not use personal data for purposes beyond the controller's documented instructions, which would include AI training. The CCPA similarly restricts service providers from using personal information for purposes other than providing services to the business. The FTC may scrutinize misrepresentations about data use under Section 5 of the FTC Act.

(2) Governance exposure: Medium. The provision is operationally significant for enterprise customers with data-minimization obligations, but its enforceability depends on the specific language of a signed Data Processing Addendum (DPA) rather than the marketing page alone. The absence of a verbatim clause in the provided document text introduces uncertainty.

(3) Jurisdiction flags. EU and EEA customers face heightened exposure under the GDPR if personal data is processed without a compliant legal basis; the training opt-out directly affects whether OpenAI's processing remains within the scope of the controller's instructions. California customers should confirm whether API or enterprise agreements satisfy the CCPA's service-provider contractual requirements.

(4) Contract and vendor implications. Procurement teams should verify that the training opt-out is documented in a signed DPA or order form rather than relying solely on the enterprise privacy page, which may not constitute a binding contractual commitment on its own. The DPA should specify the categories of personal data covered and the restrictions on subprocessor AI-training use.

(5) Compliance considerations. Compliance teams should audit data flows to confirm that all personal data submitted via the API or ChatGPT Enterprise is governed by an executed DPA, and should maintain records of processing activities that reference the training exclusion as a documented safeguard.
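The audit step in item (5) can be sketched as a simple data-flow register check. This is an illustrative sketch only, not a ConductAtlas or OpenAI tool: the `DataFlow` fields, the example flows, and the two boolean checks are hypothetical stand-ins for whatever a compliance team actually records in its register of processing activities.

```python
from dataclasses import dataclass

@dataclass
class DataFlow:
    # One outbound integration that sends personal data to a vendor.
    name: str
    vendor: str
    endpoint: str
    dpa_executed: bool        # a signed Data Processing Addendum exists
    training_excluded: bool   # the DPA documents the model-training opt-out

def audit(flows: list[DataFlow]) -> list[DataFlow]:
    """Return flows lacking an executed DPA or a documented training exclusion."""
    return [f for f in flows if not (f.dpa_executed and f.training_excluded)]

# Hypothetical register entries for two internal integrations.
flows = [
    DataFlow("support-bot", "OpenAI", "api.openai.com/v1/chat/completions", True, True),
    DataFlow("hr-summaries", "OpenAI", "api.openai.com/v1/chat/completions", True, False),
]

for gap in audit(flows):
    print(f"GAP: {gap.name} -> {gap.vendor}: missing DPA or training exclusion")
```

In this sketch, `support-bot` passes both checks while `hr-summaries` is flagged because its DPA does not document the training exclusion; a real audit would draw these fields from the organization's actual vendor contracts rather than hardcoded entries.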
Is ConductAtlas affiliated with OpenAI? No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by OpenAI.