OpenAI · Usage Policies

Absolute Prohibitions (Hardcoded Behaviors)

High severity

What it is

OpenAI absolutely bans using its tools to produce child sexual abuse material, instructions for weapons of mass destruction, cyberweapons, or content designed to facilitate real violence against specific people. These rules cannot be overridden by any operator or user under any circumstances.

Consumer impact (what this means for users)

If you attempt to use ChatGPT or any OpenAI-powered tool to generate content in these categories, your account is subject to immediate enforcement action and potential referral to law enforcement. CSAM-related violations in particular carry mandatory reporting obligations under federal law.

Cross-platform context

See how other platforms handle Absolute Prohibitions (Hardcoded Behaviors) and similar clauses.


Why it matters (compliance & risk perspective)

These are the hardest legal and ethical lines in the policy: violations could expose both the user and OpenAI to federal criminal liability, and OpenAI treats them as non-negotiable regardless of stated purpose or context.

Original clause language
Don't use our services to create content that sexualizes minors or that could be used to facilitate real-world harm to minors. Don't use our services to generate content designed to facilitate actual (not fictional) violence against specific real people or detailed instructions for violence. Don't use our services to create cyberweapons or malicious code that could cause significant damage if deployed. Don't use our services to create biological, chemical, nuclear, or radiological weapons with the potential for mass casualties.

Institutional analysis (Compliance & legal intelligence)

REGULATORY FRAMEWORK: These prohibitions directly implicate 18 U.S.C. § 2256 et seq. (federal CSAM statutes, with mandatory NCMEC reporting under 18 U.S.C. § 2258A), the Chemical Weapons Convention Implementation Act (22 U.S.C. § 6701), the Biological Weapons Anti-Terrorism Act (18 U.S.C. § 175), and the Computer Fraud and Abuse Act (18 U.S.C. § 1030) for cyberweapon generation. Enforcement authorities include the DOJ, FBI Cyber Division, and NCMEC. The EU AI Act Article 5 explicitly prohibits AI systems that enable manipulation leading to physical harm, overlapping with several of these categories.


Applicable agencies

  • FTC
    The FTC has enforcement authority over AI platforms that fail to implement adequate safeguards against harmful outputs, under its FTC Act Section 5 authority over unfair practices.

Provision details

Document information
Document
Usage Policies
Entity
OpenAI
Document last updated
March 5, 2026
Tracking information
First tracked
March 10, 2026
Last verified
April 27, 2026
Record ID
CA-P-003124
Document ID
CA-D-00005
Evidence Provenance
Source URL
Wayback Machine
SHA-256
d69a24617758e5b44e4be8eedeceb598a26dc4e280f2ab1469a45b64203e7403
Verified
✓ Snapshot stored   ✓ Change verified
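The stored SHA-256 digest above lets anyone independently confirm that a retrieved copy of the snapshot matches what was archived. A minimal sketch of that check, assuming the snapshot has been downloaded to a local file (the file path is hypothetical; the expected digest is the one recorded in this provenance block):

```python
import hashlib

# Digest recorded in the provenance record for CA-P-003124
RECORDED_SHA256 = "d69a24617758e5b44e4be8eedeceb598a26dc4e280f2ab1469a45b64203e7403"

def verify_snapshot(path: str, expected: str = RECORDED_SHA256) -> bool:
    """Hash the local snapshot file in chunks and compare to the recorded digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 8 KiB chunks so large snapshots don't need to fit in memory
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected

# Example (hypothetical local filename):
# verify_snapshot("openai-usage-policies-2026-03-10.html")
```

A match confirms byte-for-byte integrity of the downloaded copy; any difference, including whitespace or encoding changes introduced in transit, will produce a different digest.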
How to Cite
ConductAtlas Policy Archive
Entity: OpenAI | Document: Usage Policies | Record: CA-P-003124
Captured: 2026-03-10 03:28:59 UTC | SHA-256: d69a24617758e5b4…
URL: https://conductatlas.com/platform/openai/usage-policies/absolute-prohibitions-hardcoded-behaviors/
Accessed: April 28, 2026
Classification
Severity
High
Categories
