OpenAI · Usage Policies

Prohibition on Child Sexual Abuse Material

High severity · Medium confidence · Inferred from context · Unique (0 of 325 platforms)
Recent governance activity: OpenAI recorded 7 documented changes in the last 30 days.
Document Record

What it is

The policy prohibits using any OpenAI service to generate sexual content involving minors, including imagery, text, or any other format that sexualizes individuals under 18.

This analysis describes what OpenAI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This prohibition applies to all users and operators without exception. It is an absolute restriction with no operator override capability: no business or personal use case authorizes this content.

Interpretive note: The exact verbatim text could not be extracted from the PDF binary. The provision's existence and scope are inferred from the document metadata, linked URLs referencing OpenAI's CSAM policy, and publicly available versions of OpenAI's Usage Policies consistent with this document.

Consumer impact (what this means for users)

Any user or developer who generates or facilitates the generation of child sexual abuse material using OpenAI services is in direct violation of this policy and subject to immediate account termination, as well as potential criminal referral under applicable law.

How other platforms handle this

Amazon · Medium severity

You may not use the Services to: violate the security or integrity of any network, computer or communications system, software application, or network or computing device; access or use any system without permission, including attempting to probe, scan, or test the vulnerability of a system or to br...

Runway · Medium severity

You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.

Mistral AI · Medium severity

Customer will not, and will not permit any other person (including any End User) to: ... (d) attempt to reverse engineer, decompile, or otherwise attempt to discover the source code or underlying components (e.g., algorithms, weights, or systems) of the Mistral AI Products, including using the Outpu...


Monitoring

OpenAI has changed this document before.


Institutional analysis (Compliance & governance intelligence)

1. Regulatory landscape: This provision directly intersects with COPPA, 18 U.S.C. Section 2256 (federal child pornography statutes), the PROTECT Our Children Act, and the equivalent EU Directive 2011/93/EU on combating sexual abuse and exploitation of children. The National Center for Missing and Exploited Children (NCMEC) and the FBI's Crimes Against Children unit are the relevant enforcement authorities. Technology platforms have mandatory reporting obligations under 18 U.S.C. Section 2258A when they become aware of child sexual abuse material on their services.

2. Governance exposure: High. This prohibition creates a zero-tolerance compliance obligation for all API operators. Any operator deploying a product that could generate such content without adequate safeguards faces not only contract termination but also potential criminal liability and mandatory reporting obligations. The absence of a carve-out or operator override confirms that this is an absolute restriction.

3. Jurisdiction flags: This obligation applies globally; all major jurisdictions criminalize child sexual abuse material. EU operators face additional obligations under the proposed EU Child Sexual Abuse Regulation (CSAR). US operators have federal mandatory reporting obligations regardless of state law.

4. Contract and vendor implications: API operators must ensure their content moderation infrastructure specifically filters for CSAM generation attempts. Procurement teams evaluating OpenAI integrations should confirm that safety filters are active and cannot be disabled by operators. Vendor assessments should include a review of OpenAI's model safety documentation referenced at https://openai.com/safety/.

5. Compliance considerations: Compliance teams should ensure incident response plans include an NCMEC CyberTipline reporting procedure. Content moderation audits should specifically test for CSAM generation attempts across all model deployments. Developer agreements should explicitly replicate this prohibition in downstream user terms.
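As an illustration of the operator-side filtering described above, the sketch below shows a zero-tolerance gate over a moderation verdict. This is a hypothetical sketch, not OpenAI's implementation: the category name mirrors the `sexual/minors` category in OpenAI's Moderation API response, but the `ZERO_TOLERANCE` set and `should_block` helper are illustrative names, and the function inspects an already-parsed result dict rather than calling any API.

```python
# Sketch of an operator-side moderation gate. The category names mirror
# OpenAI's Moderation API output ("sexual/minors" is a category there),
# but this helper only inspects an already-parsed verdict; it does not
# call any API itself.

# Hypothetical zero-tolerance categories: block on any flag, never on a
# score threshold, and escalate to the incident-response workflow.
ZERO_TOLERANCE = {"sexual/minors"}

def should_block(moderation_result: dict) -> bool:
    """Return True if the content must be withheld and escalated."""
    categories = moderation_result.get("categories", {})
    return any(categories.get(name, False) for name in ZERO_TOLERANCE)

# Example verdicts in the Moderation API's response shape:
flagged = {"categories": {"sexual/minors": True, "violence": False}}
clean = {"categories": {"sexual/minors": False, "violence": False}}

print(should_block(flagged))  # True
print(should_block(clean))    # False
```

The design point is that zero-tolerance categories are gated on the boolean flag alone, never on a tunable confidence score, matching the no-override character of the provision.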



Applicable agencies

  • FTC
    The FTC oversees consumer protection and unfair or deceptive practices related to technology platforms, including enforcement related to child safety obligations

Applicable regulations

  • CFAA — United States (federal)
  • DMCA — United States (federal)
  • DSA — European Union
  • Trump Executive Order on AI Policy Framework — United States

Provision details

Document information
  • Document: Usage Policies
  • Entity: OpenAI
  • Document last updated: March 5, 2026

Tracking information
  • First tracked: March 10, 2026
  • Last verified: May 12, 2026
  • Record ID: CA-P-011457
  • Document ID: CA-D-00005

Evidence Provenance
  • Source URL: Wayback Machine
  • Content hash (SHA-256): d69a24617758e5b44e4be8eedeceb598a26dc4e280f2ab1469a45b64203e7403
  • Analysis generated: March 10, 2026 03:28 UTC
  • Evidence: ✓ Snapshot stored · ✓ Hash verified
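The "Hash verified" step can be reproduced independently by hashing the archived snapshot bytes and comparing the digest to the one recorded above. A minimal sketch, assuming you hold the raw snapshot file (the filename is a placeholder, and `matches_record` is an illustrative helper, not ConductAtlas tooling):

```python
import hashlib

# SHA-256 digest recorded in this provenance record.
RECORDED_SHA256 = "d69a24617758e5b44e4be8eedeceb598a26dc4e280f2ab1469a45b64203e7403"

def sha256_hex(data: bytes) -> str:
    """Hex SHA-256 digest of the raw snapshot bytes."""
    return hashlib.sha256(data).hexdigest()

def matches_record(data: bytes, recorded: str = RECORDED_SHA256) -> bool:
    """True if the snapshot's digest matches the recorded hash."""
    return sha256_hex(data) == recorded.lower()

# Usage ("snapshot.pdf" is a placeholder for the archived document):
# with open("snapshot.pdf", "rb") as f:
#     print(matches_record(f.read()))
```

Any single-byte difference in the snapshot yields a completely different digest, so a match is strong evidence the archived copy is the one analyzed.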
Citation Record
Entity: OpenAI
Document: Usage Policies
Record ID: CA-P-011457
Captured: 2026-03-10 03:28:59 UTC
SHA-256: d69a24617758e5b4…
URL: https://conductatlas.com/platform/openai/usage-policies/prohibition-on-child-sexual-abuse-material/
Accessed: May 15, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity: High
Categories



Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does OpenAI's Prohibition on Child Sexual Abuse Material clause do?

This prohibition applies to all users and operators without exception. It is an absolute restriction with no operator override capability: no business or personal use case authorizes this content.

How does this clause affect you?

Any user or developer who generates or facilitates the generation of child sexual abuse material using OpenAI services is in direct violation of this policy and subject to immediate account termination, as well as potential criminal referral under applicable law.

Is ConductAtlas affiliated with OpenAI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by OpenAI.