Changes detected: 9 total (4 high severity, 3 medium severity, 2 low severity)
Summary

This is OpenAI's Usage Policy, the rulebook that tells anyone using ChatGPT, the OpenAI API, Sora, or Codex what they are and are not allowed to do with these AI tools. The single most important point for everyday users: OpenAI can suspend or terminate your account if it determines you have violated these rules, which prohibit a wide range of activities, from generating harmful content to attempting to manipulate the AI's safety systems. If your account is suspended, you can appeal through OpenAI's transparency and content moderation page at openai.com/transparency-and-content-moderation.

Technical Summary

This document is OpenAI's Usage Policy, which governs the permissible and prohibited uses of OpenAI's AI models, APIs, and products (including ChatGPT, Sora, and Codex), establishing contractual obligations enforceable through OpenAI's Terms of Service. The most significant obligations include absolute prohibitions on using OpenAI systems to generate child sexual abuse material (CSAM), develop weapons of mass destruction, undermine AI oversight mechanisms, or engage in cyberattacks; violations result in account termination and potential law enforcement referral. Notably, the policy creates a tiered operator-user trust architecture whereby API operators can expand or restrict default model behaviors for end users, which exposes third-party developers building on the platform to downstream liability. The document engages the EU AI Act (particularly prohibited AI practices under Article 5), FTC Act Section 5 unfair or deceptive practices standards, COPPA given age-restriction provisions, and CSAM-related federal law (18 U.S.C. § 2256 et seq.); compliance teams deploying OpenAI APIs must conduct vendor risk assessments to ensure their operator-level configurations do not inadvertently enable prohibited uses. OpenAI reserves unilateral enforcement discretion, including account suspension, with an appeals process referenced but not fully detailed in this document, creating residual due process risk for enterprise customers.

Institutional Analysis

1) REGULATORY EXPOSURE: This policy directly engages the EU AI Act Article 5 (prohibited AI system practices, including manipulation and exploitation of vulnerabilities), FTC Act Section 5 (unfair or deceptive practices, particularly relevant to operator misrepresentation of AI capabilities), COPPA…

Evidence Provenance
Captured: March 10, 2026 03:21 UTC
Document ID: CA-D-000005
Version ID: CA-V-000067
Wayback Machine: archived versions available
SHA-256: 286d3c66e538279cb66e79eb47315572fbb02f63eeeff315e58d03e10b01b3b4
✓ Snapshot stored ✓ Text extracted ✓ Change verified ✓ Cryptographically signed
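The SHA-256 hash above lets anyone independently confirm that a local copy of the captured policy text matches the archived snapshot. A minimal verification sketch in Python, assuming you have the snapshot saved as a local file (the file path and function names here are illustrative, not part of the provenance service):

```python
import hashlib

# Published digest from the provenance record above
EXPECTED_SHA256 = "286d3c66e538279cb66e79eb47315572fbb02f63eeeff315e58d03e10b01b3b4"

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading in chunks
    so large snapshots do not have to fit in memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_snapshot(path: str) -> bool:
    """Return True only if the local snapshot's digest matches
    the published hash exactly."""
    return sha256_of_file(path) == EXPECTED_SHA256
```

Note that the hash covers the exact captured bytes; re-saving the page through a browser or copying the text by hand will almost certainly produce a different digest, so verification should be run against the stored snapshot file itself.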
Change Timeline
High Severity — 4 provisions
Medium Severity — 3 provisions
Low Severity — 2 provisions