This page describes what the document states, permits, or reserves. It does not constitute a legal determination about enforceability, and regulatory applicability may vary by jurisdiction.
This is OpenAI's Acceptable Use Policy, which sets the rules for what you can and cannot do with ChatGPT, Sora, Codex, and OpenAI's other AI tools. The policy explicitly prohibits using these services to generate child sexual abuse material, create weapons of mass destruction, build election disinformation campaigns, develop malware or cyberweapons, or operate illegal surveillance systems. If you use OpenAI's API to build products, you are responsible for ensuring that your users also follow these rules.
This document is OpenAI's Usage Policy (Acceptable Use Policy), which governs permissible and prohibited uses of OpenAI's models, APIs, products, and services across all platforms, including ChatGPT, Sora, Codex, and the developer API. Users and operators must not use OpenAI's services for a defined set of prohibited activities, including generating content that sexualizes minors, facilitating weapons capable of mass casualties, creating cyberweapons, generating disinformation to undermine elections, and building surveillance tools that violate civil rights.

The policy establishes a tiered operator-user framework in which API operators bear responsibility for ensuring downstream user compliance. The terms authorize OpenAI to take enforcement action, including suspension or termination, for violations, and reference an appeals process for affected users.

The document engages with regulatory frameworks governing AI systems, child safety, cybersecurity, and content moderation, including the EU AI Act, COPPA, the FTC Act, and emerging national AI governance requirements, though the applicability of specific obligations depends on the jurisdiction and on whether a deployment qualifies as a high-risk AI use case. Compliance teams should note that the policy places affirmative due-diligence obligations on API operators to restrict prohibited uses by their end users, which may require contract amendments, user consent audits, and content moderation infrastructure reviews.
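The operator due-diligence obligation described above is typically implemented as a pre-request screening step: the operator checks each end user's input against prohibited-use categories before forwarding it to the model. The following is a minimal sketch under stated assumptions — the category names and the shape of `moderation_result` are simplified illustrations (loosely mirroring common moderation-API responses), not the actual schema of any OpenAI endpoint; in production the result would come from a moderation service rather than being constructed locally.

```python
# Hypothetical operator-side compliance gate. The category names and the
# moderation_result format are illustrative assumptions; a real deployment
# would obtain the flags from a moderation service before forwarding the
# user's prompt to the model.

BLOCKED_CATEGORIES = {
    "sexual/minors",    # absolute prohibition (CSAM)
    "illicit/violent",  # e.g. weapons facilitation
}

def gate_request(moderation_result: dict) -> bool:
    """Return True if the request may proceed, False if it must be blocked.

    `moderation_result` maps category names to booleans indicating
    whether the input was flagged under that category.
    """
    flagged = {cat for cat, hit in moderation_result.items() if hit}
    return not (flagged & BLOCKED_CATEGORIES)
```

An operator would call `gate_request` on the screening result for each incoming prompt and refuse (and, where required, log or escalate) any request that returns False, which is one concrete form the "content moderation infrastructure" mentioned above can take.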