8 provisions total: 4 high severity, 4 medium severity, 0 low severity
Summary

This is OpenAI's privacy policy, explaining how OpenAI collects and uses your personal data when you use ChatGPT, the API, and other OpenAI products. The most important thing to know is that your conversations with ChatGPT may be used to train OpenAI's AI models unless you actively opt out. You can turn off model training in your ChatGPT account under Settings > Data Controls > Improve the model for everyone.

Technical Summary

This document is OpenAI's global Privacy Policy (updated February 6, 2026), governing the collection, use, and disclosure of personal data by OpenAI, L.L.C. for users outside the EEA, UK, Switzerland, and the United States. Legal bases include contractual necessity, legitimate interests, and consent, depending on jurisdiction. The policy obligates OpenAI to provide data subject rights (access, deletion, correction, portability, opt-out of training) and obligates users to provide accurate information, while authorizing broad data collection: conversation content, device identifiers, location data, and third-party-sourced personal information used to train AI models. A notable deviation from industry standard is OpenAI's explicit disclosure that user-submitted content, including conversations with ChatGPT, may be used to train AI models, with opt-out available but not the default. OpenAI also discloses, in broad terms, sharing personal data with affiliates, vendors, law enforcement, and business transaction counterparties. The policy engages the GDPR (with a separate EEA policy), the CCPA/CPRA for California residents, and various U.S. state privacy laws (Colorado, Connecticut, Virginia, Texas, Oregon). Material compliance considerations include the adequacy of consent mechanisms for AI training data use and the sufficiency of opt-out mechanisms under state law.

Evidence Provenance
Captured: March 10, 2026 03:33 UTC
Document ID: CA-D-000006
Version ID: CA-V-000070
SHA-256: 3b160fe944be24fac66984713a224734d9c562d07559a5fc517f7f1fb9dff79d
Status: snapshot stored, text extracted, change verified, cryptographically signed
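The provenance check above (recompute the stored snapshot's digest and compare it against the recorded SHA-256) can be sketched as a short verification routine. This is a minimal sketch, not the service's actual verifier; the function name and file path are illustrative assumptions:

```python
import hashlib

def verify_snapshot(path: str, expected_sha256: str) -> bool:
    """Recompute a captured snapshot's SHA-256 and compare it
    to the digest recorded in the provenance block.

    `path` and `expected_sha256` are hypothetical inputs standing
    in for a real capture file and its published digest.
    """
    h = hashlib.sha256()
    # Read in chunks so large captures don't load into memory at once.
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    # Hex digests are case-insensitive; normalize before comparing.
    return h.hexdigest() == expected_sha256.lower()
```

Any single-byte change to the stored text yields a different digest, which is what lets a reader confirm the archived snapshot has not been altered since capture.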


Applicable Regulations

EU AI Act (European Union)
BIPA (Illinois, USA)
CCPA/CPRA (California, USA)
CFAA (United States Federal)
CAN-SPAM (United States Federal)
DMCA (United States Federal)
DSA (European Union)
GDPR (European Union)
UK GDPR (United Kingdom)