OpenAI · OpenAI Usage Policies

Absolute Prohibition on Child Sexual Abuse Material

High severity · High confidence · Explicit document language · Unique · 0 of 325 platforms
Recent governance activity: OpenAI recorded 5 documented changes in the last 30 days.
Document Record

What it is

OpenAI absolutely prohibits using any of its tools or models to generate sexual content involving minors, with no exceptions regardless of context, operator permissions, or stated purpose.

This analysis describes what OpenAI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This is a hard, unconditional restriction that applies to every user and operator without exception; violation would constitute both a policy breach and potentially criminal conduct under laws in most jurisdictions.

Consumer impact (what this means for users)

Any user or developer who generates or attempts to generate child sexual abuse material using OpenAI products will be in violation of this policy and subject to account termination, and the conduct may be reportable to law enforcement under applicable mandatory reporting obligations.

How other platforms handle this

Amazon (Medium)

You may not use the Services to: violate the security or integrity of any network, computer or communications system, software application, or network or computing device; access or use any system without permission, including attempting to probe, scan, or test the vulnerability of a system or to br...

Runway (Medium)

You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.

Mistral AI (Medium)

Customer will not, and will not permit any other person (including any End User) to: ... (d) attempt to reverse engineer, decompile, or otherwise attempt to discover the source code or underlying components (e.g., algorithms, weights, or systems) of the Mistral AI Products, including using the Outpu...


Monitoring

OpenAI has changed this document before.

Original Clause Language

"Generate CSAM or detailed sexual content involving minors"

— Excerpt from the OpenAI Usage Policies


Institutional analysis (Compliance & governance intelligence)

(1) REGULATORY LANDSCAPE: This provision directly engages with the PROTECT Act (18 U.S.C. § 2256 and related sections) in the United States, the EU's Directive on combating sexual abuse and exploitation of children, and equivalent criminal statutes in virtually all jurisdictions. The National Center for Missing and Exploited Children (NCMEC) operates the CyberTipline, and electronic service providers may have mandatory reporting obligations under 18 U.S.C. § 2258A. The FTC and state attorneys general also have consumer protection authority over platforms that fail to implement adequate safeguards.

(2) GOVERNANCE EXPOSURE: High. This is among the most legally and reputationally significant prohibitions in the document. Failure to enforce this restriction — whether through technical controls, content moderation, or operator oversight — could expose OpenAI and downstream operators to criminal referral, civil liability, and regulatory enforcement action.

(3) JURISDICTION FLAGS: This obligation applies globally. All major jurisdictions criminalize CSAM generation and distribution. EU operators face obligations under the proposed EU CSAM Regulation in addition to existing national criminal law. UK operators must comply with the Online Safety Act's provisions on illegal content.

(4) CONTRACT AND VENDOR IMPLICATIONS: API operators must ensure their own terms of service and technical controls prohibit this use. Procurement teams should verify that any downstream integration of OpenAI models includes explicit contractual prohibitions on CSAM-related use and appropriate content filtering. This prohibition should be reflected in vendor risk assessments.

(5) COMPLIANCE CONSIDERATIONS: Operators should implement content filtering and detection mechanisms, establish incident response procedures for potential CSAM detection, document their compliance posture, and assess whether their platforms trigger mandatory NCMEC reporting obligations under federal law. Legal teams should confirm that employment and contractor policies address reporting obligations.


Applicable agencies

  • FTC
    The FTC has consumer protection authority over platforms that fail to implement adequate safeguards against illegal content generation, including failure to enforce stated policies.
  • State AG
    State attorneys general have criminal and consumer protection enforcement authority over CSAM-related conduct and platform failures to enforce stated prohibitions.

Applicable regulations

  • CFAA (United States Federal)
  • DMCA (United States Federal)
  • DSA (European Union)
  • Trump Executive Order on AI Policy Framework (US)

Provision details

Document information
Document: OpenAI Usage Policies
Entity: OpenAI
Document last updated: May 11, 2026

Tracking information
First tracked: May 11, 2026
Last verified: May 12, 2026
Record ID: CA-P-011722
Document ID: CA-D-00753

Evidence Provenance
Source URL: Wayback Machine
Content hash (SHA-256): 7bc76af79d3d7702e7ce284199b0b15a9dc7dd89f62958bd0823240c00eaab06
Analysis generated: May 11, 2026 12:43 UTC
Evidence: ✓ Snapshot stored · ✓ Hash verified
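The content hash above lets anyone independently confirm that a stored snapshot matches the archived record. A minimal sketch of that check in Python, assuming the hash is computed over the snapshot's raw bytes (the exact bytes ConductAtlas hashes, e.g. raw HTML versus normalized text, are an assumption here, and `verify_snapshot` is a hypothetical helper, not part of any ConductAtlas tooling):

```python
import hashlib

# Published content hash from this record (hex, SHA-256).
EXPECTED = "7bc76af79d3d7702e7ce284199b0b15a9dc7dd89f62958bd0823240c00eaab06"


def sha256_hex(data: bytes) -> str:
    """Return the lowercase hex SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()


def verify_snapshot(data: bytes, expected: str = EXPECTED) -> bool:
    """Compare a snapshot's digest to the published content hash."""
    return sha256_hex(data) == expected.lower()
```

If the recomputed digest differs, the snapshot bytes are not the ones that were originally archived (or were hashed under a different normalization).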
Citation Record
Entity: OpenAI
Document: OpenAI Usage Policies
Record ID: CA-P-011722
Captured: 2026-05-11 12:43:28 UTC
SHA-256: 7bc76af79d3d7702…
URL: https://conductatlas.com/platform/openai/openai-usage-policies/absolute-prohibition-on-child-sexual-abuse-material/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity: High
Categories


Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions


Is ConductAtlas affiliated with OpenAI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by OpenAI.