OpenAI's Usage Policy defines hard limits on what consumers and developers can request from ChatGPT, Sora, Codex, and the API, including absolute bans on generating CSAM, weapons-related content, and content designed to undermine AI safety systems. Violations can result in immediate, unilateral account suspension or termination, which for paying subscribers or businesses relying on API access represents a material service-disruption risk. The appeals process for enforcement actions is documented at https://openai.com/transparency-and-content-moderation.

One prohibition stands out: users may not "engage or facilitate actions that meaningfully undermine the ability of legitimate principals to oversee and correct advanced AI models." This clause is unusual in the industry and reflects OpenAI's specific safety philosophy: it creates enforceable obligations around AI governance and model safety that go beyond typical content-moderation policies.