OpenAI absolutely prohibits using its tools to produce child sexual abuse material, instructions for weapons of mass destruction, cyberweapons, or content designed to facilitate real-world violence against specific people. These restrictions cannot be overridden by any operator or user under any circumstances.
If you attempt to use ChatGPT or any OpenAI-powered tool to generate content in these categories, your account will be subject to immediate enforcement action and potential referral to law enforcement, including for CSAM-related violations which carry mandatory reporting obligations under federal law.
These are the hardest legal and ethical lines in the policy — violations could expose both the user and OpenAI to federal criminal liability, and OpenAI treats these as non-negotiable regardless of stated purpose or context.
REGULATORY FRAMEWORK: These prohibitions directly implicate 18 U.S.C. § 2256 et seq. (federal CSAM statutes, with mandatory NCMEC reporting under 18 U.S.C. § 2258A), the Chemical Weapons Convention Implementation Act (22 U.S.C. § 6701), the Biological Weapons Anti-Terrorism Act (18 U.S.C. § 175), and the Computer Fraud and Abuse Act (18 U.S.C. § 1030) for cyberweapon generation. Enforcement authorities include the DOJ, FBI Cyber Division, and NCMEC. The EU AI Act Article 5 explicitly prohibits AI systems that enable manipulation leading to physical harm, overlapping with several of these categories.