This page describes what the document states, permits, or reserves. It does not constitute a legal determination about enforceability, and regulatory applicability may vary by jurisdiction.

Methodology
This is OpenAI's rulebook for what you are and are not allowed to do with ChatGPT, the API, and all other OpenAI products. For everyday users, the key point is that OpenAI prohibits a specific list of harmful uses, including generating content that sexualizes minors, helping create weapons capable of mass casualties, and building tools to conduct cyberattacks, and reserves the right to suspend or terminate access if these rules are violated. If you are a developer or business building on OpenAI's API, you are also responsible for making sure your customers follow these rules.
This document is OpenAI's Usage Policy (an acceptable use policy) governing permissible and prohibited uses of OpenAI's models, tools, APIs, and products. Its authority is grounded in OpenAI's Terms of Use.

The policy requires all users and operators to comply with a defined set of prohibited use categories, and it makes operators building on the API responsible for ensuring that their end users also comply, effectively creating a two-tier compliance obligation. Absolute prohibitions include generation of child sexual abuse material, creation of cyberweapons, development of weapons of mass destruction, and content facilitating real-world violence. Alongside these sit conditional restrictions, where context, safeguards, and operator permissions can modify what is permissible; that structure places significant interpretive and enforcement discretion with OpenAI.

The policy engages with multiple regulatory frameworks: the EU AI Act (which classifies certain AI uses as prohibited or high-risk), COPPA and other child safety statutes, computer fraud and cybercrime laws across jurisdictions, export control regimes (ITAR, EAR), and platform liability frameworks such as Section 230 of the CDA. The policy's operator responsibility provisions may create compliance surface area under each of these, depending on use case and jurisdiction.

Compliance teams deploying OpenAI via the API should note three points: the operator tier carries downstream liability exposure for end-user violations; certain permitted-by-default behaviors can be unlocked only by operators who meet unspecified eligibility criteria; and OpenAI reserves unilateral authority to update its usage policies without specifying notice obligations or effective-date timelines.
Monitoring
OpenAI has updated this document before.