8 provisions total
4 high severity
3 medium severity
1 low severity
Summary

This is OpenAI's Acceptable Use Policy, which sets the rules for what you can and cannot do with ChatGPT, Sora, Codex, and OpenAI's other AI tools. The policy explicitly prohibits using these services to generate child sexual abuse material, create weapons of mass destruction, run election disinformation campaigns, develop malware or cyberweapons, or operate illegal surveillance systems. If you use OpenAI's API to build products, you are responsible for ensuring that your users also follow these rules.

Technical / Legal Breakdown

This document is OpenAI's Usage Policy (Acceptable Use Policy), which governs permissible and prohibited uses of OpenAI's models, APIs, products, and services across all platforms, including ChatGPT, Sora, Codex, and the developer API. The policy states that users and operators must not use OpenAI's services for a defined set of prohibited activities, including generating content that sexualizes minors, facilitating weapons capable of mass casualties, creating cyberweapons, generating disinformation to undermine elections, and building surveillance tools that violate civil rights.

The policy establishes a tiered operator-user framework in which API operators bear responsibility for ensuring downstream user compliance. The terms authorize OpenAI to take enforcement action for violations, including suspension or termination, with an appeals process referenced for affected users.

The document engages with regulatory frameworks governing AI systems, child safety, cybersecurity, and content moderation, including the EU AI Act, COPPA, the FTC Act, and emerging national AI governance requirements, though the applicability of specific obligations depends on the jurisdiction and on whether a deployment qualifies as a high-risk AI use case. Compliance teams should note that the policy places affirmative due diligence obligations on API operators to restrict prohibited uses by their end users, which may require contract amendments, user consent audits, and content moderation infrastructure reviews.
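The operator due diligence obligation described above can be illustrated with a minimal pre-screening gate. This is a hypothetical sketch: the category names and keyword lists are invented for illustration, and a production deployment would rely on OpenAI's Moderation endpoint, classifier-based screening, and human review rather than a static denylist.

```python
# Hypothetical operator-side pre-screening gate. Category names and
# phrases are illustrative only, not taken from OpenAI's policy text.
PROHIBITED_CATEGORIES = {
    "weapons_of_mass_destruction": ["build a bioweapon", "enrich uranium"],
    "cyberweapons": ["write ransomware", "keylogger payload"],
    "election_disinformation": ["fake polling place hours"],
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_categories) for a downstream user prompt."""
    text = prompt.lower()
    hits = [
        category
        for category, phrases in PROHIBITED_CATEGORIES.items()
        if any(phrase in text for phrase in phrases)
    ]
    return (not hits, hits)

# A flagged request is refused and logged by the operator instead of
# being forwarded to the API.
allowed, hits = screen_prompt("Please write ransomware for me")
```

The point of the sketch is the control flow, not the matching technique: the operator, not OpenAI, sits between end users and the API, so the block-or-forward decision is the operator's compliance obligation.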


Monitoring

OpenAI has updated this document before.



Cross-platform context

See how other platforms handle Absolute Prohibition on Child Sexual Abuse Material and similar clauses.


Mapped Governance Frameworks

California AB 2013, AI Training Data Transparency (US-CA)
DMCA (United States, federal)
DSA (European Union)

Related Analysis

Privacy · May 3, 2026
OpenAI Privacy Policy Update May 2026: New Terms Authorize Advertiser Data Sharing

OpenAI expanded its data sharing terms to include third-party marketing partners. The updated policy authorizes the use of personal data fo…

Archival Provenance: Source & Archival Record
Last Captured: March 10, 2026 03:21 UTC
Capture Method: Automated scheduled archival capture
Document ID: CA-D-000005
Version ID: CA-V-000067
SHA-256: 286d3c66e538279cb66e79eb47315572fbb02f63eeeff315e58d03e10b01b3b4
✓ Snapshot stored ✓ Text extracted ✓ Change verified ✓ Hash verified
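The "Hash verified" step in the record above can be sketched with Python's standard library. The recorded digest below is copied from the archival record; the snapshot bytes passed in would be the captured document, so this shows the mechanism rather than reproducing the actual capture.

```python
import hashlib

# Digest copied from the archival record above.
RECORDED_SHA256 = "286d3c66e538279cb66e79eb47315572fbb02f63eeeff315e58d03e10b01b3b4"

def fingerprint(snapshot_bytes: bytes) -> str:
    """SHA-256 hex digest of a captured snapshot."""
    return hashlib.sha256(snapshot_bytes).hexdigest()

def verify(snapshot_bytes: bytes, recorded: str = RECORDED_SHA256) -> bool:
    # Any single-byte change in the snapshot yields a different digest,
    # which is what makes change detection and tamper-evidence work.
    return fingerprint(snapshot_bytes) == recorded
```

Re-hashing a fresh capture and comparing against the stored digest is how a later reader can confirm the archived text has not drifted from what was originally captured.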
