OpenAI states that data you submit through the API, ChatGPT Enterprise, or ChatGPT Team products is not used to train its AI models unless you separately opt in.
This analysis describes what OpenAI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. See our methodology for details.
This provision directly addresses one of the most common concerns for enterprise customers: whether their proprietary data, client information, or confidential inputs could be incorporated into OpenAI's model training. The document states this does not happen by default for these specific product tiers.
Interpretive note: The commitment is stated on a marketing/disclosure page rather than in a formally executed contractual instrument, and enforceability depends on the operative Terms of Service and any executed DPA.
Enterprise and API customers' inputs and outputs are stated to be excluded from model training by default, which may be material for organizations submitting confidential, proprietary, or regulated data through these products.
OpenAI has changed this document before.
"We do not train on your business data by default. Inputs and outputs through the API and in ChatGPT Enterprise and ChatGPT Team are not used to train our models by default." (Excerpt from OpenAI's Enterprise Privacy page)
REGULATORY LANDSCAPE: This provision engages GDPR principles of purpose limitation and data minimization, as well as CCPA provisions regarding use of personal information beyond the disclosed purpose. The FTC's authority over unfair and deceptive practices is relevant if the stated commitment is not operationally implemented as described. No specific GDPR article is cited in the document, but the commitment aligns with Article 5 purpose limitation principles.

GOVERNANCE EXPOSURE: Medium. The commitment is stated clearly, but its enforceability depends on whether it is reflected in an executed DPA or Terms of Service rather than a marketing disclosure page alone. If a customer has not executed a DPA, the operative Terms of Service govern, and compliance teams should verify consistency between the two instruments.

JURISDICTION FLAGS: EU/EEA customers have heightened exposure because GDPR imposes binding obligations on data processors, and a marketing-page commitment may not satisfy the Article 28 written processor agreement requirement. California customers may evaluate this commitment against CCPA service provider restrictions on secondary use of personal information.

CONTRACT AND VENDOR IMPLICATIONS: Procurement teams should confirm this commitment is reflected, verbatim or by reference, in the executed DPA or enterprise agreement, as a webpage disclosure is not a contractual guarantee. Vendor assessments should include verification of the technical mechanisms by which training exclusion is implemented.

COMPLIANCE CONSIDERATIONS: Compliance teams should map this commitment against their data processing records, update vendor assessments to reflect the no-training commitment, and confirm that any sub-processors OpenAI uses are subject to equivalent restrictions. Where a DPA has not yet been executed, this should be treated as a due diligence gap.
Built from archived source documents, structured governance mappings, and historical version tracking.
Is ConductAtlas affiliated with OpenAI? No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by OpenAI.