OpenAI · GPT-4o System Card (PDF)

CSAM and Sexual Content Safeguards

High severity

What it is

GPT-4o has built-in blocks against generating child sexual abuse material (CSAM). Operators running adult platforms can unlock explicit sexual content generation for their users, but content involving minors remains blocked.

Consumer impact (what this means for users)

If you or your children use an app built on GPT-4o, the platform operator may have enabled explicit adult content. OpenAI's CSAM block is the primary protection for minors; the system card makes no mention of robust age-verification requirements for operators who unlock adult content.


Why it matters (compliance & risk perspective)

The architecture allowing operators to unlock adult content, combined with acknowledged limitations in age verification, creates a structural risk that explicit content may reach minors through third-party platforms.

Original clause language
We have implemented safeguards to prevent the generation of CSAM or detailed sexual content involving minors... We have classifiers to detect when users are attempting to use the model to generate CSAM and will refuse such requests. We also have safeguards in place to prevent the generation of explicit sexual content by default, though operators can unlock explicit sexual content generation for appropriate adult platforms.

Institutional analysis (Compliance & legal intelligence)

(1) REGULATORY FRAMEWORK: This provision implicates COPPA (15 U.S.C. §6501 et seq.) for deployments accessible to children under 13, the PROTECT Act (18 U.S.C. §2252A) for CSAM, the EU AI Act Article 5 (prohibited AI practices causing harm to minors), and California's Age-Appropriate Design Code (AB 2273). The FTC enforces COPPA; DOJ enforces PROTECT Act violations; the EU AI Office enforces AI Act prohibitions. (2)


Applicable agencies

  • FTC
    FTC enforces COPPA (15 U.S.C. §6501) and has authority over platforms that fail to implement adequate protections for minors accessing AI-generated content.
  • State AG
    State Attorneys General enforce state child safety statutes and California's Age-Appropriate Design Code against platforms that expose minors to adult AI-generated content.

Provision details

Document information
Document
GPT-4o System Card (PDF)
Entity
OpenAI
Document last updated
March 5, 2026
Tracking information
First tracked
March 10, 2026
Last verified
April 27, 2026
Record ID
CA-P-003145
Document ID
CA-D-00008
Evidence Provenance
Source URL
Wayback Machine
SHA-256
7c23ef53467eea199596abe78511d57ffee1e94b50ef10ac0f7d81df278b5059
Verified
✓ Snapshot stored   ✓ Change verified
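The recorded SHA-256 digest above can be checked against a local copy of the archived document. A minimal sketch in Python (the file name is a placeholder, not the archive's actual path):

```python
import hashlib

# Digest recorded in the provenance record above.
EXPECTED_SHA256 = "7c23ef53467eea199596abe78511d57ffee1e94b50ef10ac0f7d81df278b5059"

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in 1 MiB chunks so large PDFs need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Usage (path is a placeholder for your local snapshot):
#   actual = sha256_of_file("gpt-4o-system-card.pdf")
#   print("match" if actual == EXPECTED_SHA256 else "mismatch")
```

A matching digest confirms the local file is byte-for-byte identical to the snapshot that was hashed; any edit to the PDF, however small, produces a different digest.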
How to Cite
ConductAtlas Policy Archive
Entity: OpenAI | Document: GPT-4o System Card (PDF) | Record: CA-P-003145
Captured: 2026-03-10 03:40:55 UTC | SHA-256: 7c23ef53467eea19…
URL: https://conductatlas.com/platform/openai/gpt-4o-system-card-pdf/csam-and-sexual-content-safeguards/
Accessed: May 2, 2026
Classification
Severity
High
Categories
