9 provisions flagged: 5 high severity, 3 medium severity, 1 low severity
Summary

This is OpenAI's rulebook for how anyone — individuals, businesses, and developers — is allowed to use ChatGPT, the API, Sora, and other OpenAI tools. The most important thing for everyday users is that OpenAI can suspend or terminate your account if it determines you have violated these rules, including a broad list of prohibited activities ranging from generating violent content to attempting to undermine AI oversight systems. If you believe your account was actioned unfairly, OpenAI provides an appeals process linked from its Transparency and Content Moderation page.

Technical Summary

This document is OpenAI's Usage Policy, which governs acceptable and prohibited uses of its AI models, APIs, and consumer products (ChatGPT, Sora, Codex, and related services) and operates as a binding behavioral contract supplementing OpenAI's Terms of Service. The policy creates affirmative obligations on all users and API operators to prevent specified categories of harmful outputs, including weapons-of-mass-destruction assistance, CSAM, critical-infrastructure attacks, and AI-generated influence operations, and it grants OpenAI unilateral enforcement authority, including account suspension. Notably, the policy establishes a tiered operator-user trust model in which API operators may expand or restrict default model behaviors for downstream users, creating indirect liability exposure for developers who misconfigure or inadequately constrain their deployments. The policy engages the EU AI Act (particularly the prohibited AI practices under Article 5 and high-risk system obligations), FTC Act Section 5 authority over unfair or deceptive practices, COPPA (given the age-related restrictions), and federal CSAM law (18 U.S.C. § 2256 et seq.). Compliance teams should note that the operator accountability framework may trigger platform-liability analysis under evolving AI-specific regulatory regimes in the EU, the UK, and US states. Material compliance considerations include the absence of a defined policy-update notification mechanism, ambiguity around operator audit rights, and the policy's incorporation of a living Model Spec document that can alter behavioral constraints without formal versioning.

Evidence Provenance

Captured: March 10, 2026 03:21 UTC
Document ID: CA-D-000005
Version ID: CA-V-000067
Wayback Machine: archived versions available
SHA-256: 286d3c66e538279cb66e79eb47315572fbb02f63eeeff315e58d03e10b01b3b4
Status: snapshot stored, text extracted, change verified, cryptographically signed


Applicable Regulations

EU AI Act (European Union)
BIPA (Illinois, USA)
CCPA/CPRA (California, USA)
CFAA (United States, federal)
CAN-SPAM (United States, federal)
DMCA (United States, federal)
DSA (European Union)
GDPR (European Union)
UK GDPR (United Kingdom)