This is OpenAI's US privacy policy explaining how it collects and uses your data when you use ChatGPT and other OpenAI products. The most important thing to know is that by default, your conversations with ChatGPT — including everything you type and every response generated — may be used to train OpenAI's AI models. You can turn off this training use of your data by going to ChatGPT Settings > Data Controls and disabling 'Improve the model for everyone.'
OpenAI's US Privacy Policy governs the collection, use, and disclosure of personal information by OpenAI, L.L.C. in connection with its services, including ChatGPT, the API, and related products. It relies on legal bases including consent, contractual necessity, legitimate interests, and compliance with legal obligations under applicable US state privacy laws.

The policy creates significant obligations: it grants users rights to access, correct, delete, and opt out of certain data processing, while obligating OpenAI to disclose the categories of personal data collected, the purposes of collection, and the categories of third-party recipients. A notable deviation from standard practice is OpenAI's explicit acknowledgment that it collects conversation content (including user prompts and AI-generated outputs) and may use this content to train its AI models, with opt-out available only through a settings toggle rather than opt-in by default, creating an asymmetric consent architecture.

The policy engages the CCPA/CPRA (Cal. Civ. Code §1798.100 et seq.) and various other US state privacy laws, including the Virginia VCDPA, Colorado CPA, Connecticut CTDPA, and Texas TDPSA, with enforcement by the California Attorney General, other state AGs, and the FTC under Section 5 of the FTC Act.

Material compliance considerations include the breadth of sensitive data categories collected (including health-related information, location data, and biometric-adjacent voice and image data), the use of conversation data for AI model training, and the engagement of numerous third-party advertising and analytics vendors, including Meta, Google, LinkedIn, Reddit, and Bing, whose tracking scripts are embedded in the policy page itself.
This addition reveals new third-party tracking infrastructure and cross-site behavioral monitoring not previously disclosed, representing a material expansion of data collection methods.
This new provision explicitly acknowledges the collection of sensitive health and financial data, which creates heightened privacy obligations and significantly expands the permissible data categories.
This addition explicitly details third-party sharing categories and law enforcement disclosure procedures, replacing the previous vague reference and providing transparency on data flows to external parties.
This new provision adds explicit CPRA compliance language, including sensitive-data limitation rights, aligning with the 2023 California privacy law amendments not reflected in the previous version.
The previous high-severity framing of affiliate sharing has been deprioritized in the current version, replaced by more detailed disclosure provisions.
Removal of the explicit cross-border transfer provision eliminates transparency on international data movement mechanisms (e.g., standard contractual clauses, adequacy decisions).
The absence of a dedicated GDPR provision in the current version may indicate either consolidation into the general user-rights provisions or reduced emphasis on EU-specific compliance.
The previous version had no excerpt data; the current version explicitly states that the opt-out mechanism is available in account settings rather than leaving it implicit or unstated.
The previous version had no excerpt; the current version consolidates deletion, export, and portability rights into a single provision with an explicit reference to the Privacy Portal URL.
The previous version had no excerpt data; the current version adds an explicit requirement for parental consent for users aged 13-18 and provides a contact mechanism for child data concerns.
The previous version had no excerpt; the current version explicitly details the deletion/anonymization commitment and an expanded retention purpose scope.