Influence operations: building tools designed to run influence operations, generating content intended to sow division or to be used in political ads or propaganda, or developing targeting strategies based on political ideology.
With AI-generated content increasingly implicated in election interference, this clause reflects OpenAI's attempt both to limit its liability and to comply with emerging electoral-integrity regulations worldwide.
OpenAI's Usage Policy defines hard limits on what consumers and developers can request from ChatGPT, Sora, Codex, and the API, including absolute bans on generating CSAM, weapons-related content, and content designed to undermine AI safety systems. Violations can result in immediate, unilateral account suspension or termination, which, for paying subscribers or businesses that rely on API access, represents a material service-disruption risk. The appeals process for enforcement actions is described at https://openai.com/transparency-and-content-moderation.