Create cyberweapons or malicious code that could cause significant damage if deployed.
Because AI models can generate functional code, this prohibition addresses a specific and serious risk: OpenAI tools being weaponized for cybercrime. Violations can also implicate federal computer fraud law, such as the U.S. Computer Fraud and Abuse Act (CFAA).
OpenAI's Usage Policy defines hard limits on what consumers and developers can request from ChatGPT, Sora, Codex, and the API, including absolute bans on generating CSAM, weapons-related content, and content designed to undermine AI safety systems. Violations can result in immediate, unilateral account suspension or termination; for paying subscribers or businesses that rely on API access, this represents a material service-disruption risk. You can review the appeals process for enforcement actions at https://openai.com/transparency-and-content-moderation.