OpenAI · Usage Policies

Prohibition on Cyberattack Facilitation

High severity

What it is

Creating cyberweapons or malicious code that could cause significant damage if deployed.

Why it matters

Because AI models can generate functional code, this prohibition addresses a specific and serious risk: OpenAI tools being weaponized for cybercrime. Violations may also implicate federal computer fraud law.

Consumer impact

OpenAI's Usage Policy defines hard limits on what consumers and developers can request from ChatGPT, Sora, Codex, and the API — including absolute bans on generating CSAM, weapons-related content, and content designed to undermine AI safety systems. Violations can result in immediate, unilateral account suspension or termination, which for paying subscribers or businesses relying on API access represents a material service disruption risk. You can review the appeals process for enforcement actions at https://openai.com/transparency-and-content-moderation.

Applicable agencies

  • FTC
    FTC has authority over unfair or deceptive practices involving cybersecurity failures and can act against platforms that inadequately prevent malicious code generation.
    File a complaint →

Provision details

Document information
Document: Usage Policies
Entity: OpenAI
Document last updated: March 5, 2026

Tracking information
First tracked: March 10, 2026
Last verified: April 4, 2026
Record ID: CA-P-001978
Document ID: CA-D-00005
Evidence Provenance
Source URL: Wayback Machine
SHA-256: d69a24617758e5b44e4be8eedeceb598a26dc4e280f2ab1469a45b64203e7403
Verified: ✓ Snapshot stored   ✓ Change verified
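To confirm a locally downloaded copy of the archived document matches the published digest above, a reader can recompute its SHA-256 and compare. The sketch below is illustrative: the file path and helper names are assumptions, not part of the archive's tooling.

```python
import hashlib

# Published digest from the Evidence Provenance section above.
EXPECTED_SHA256 = "d69a24617758e5b44e4be8eedeceb598a26dc4e280f2ab1469a45b64203e7403"

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Return the hex SHA-256 digest of the file at `path`, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_record(path: str, expected: str = EXPECTED_SHA256) -> bool:
    """True if the local snapshot's digest matches the archived record."""
    return sha256_of_file(path) == expected.lower()
```

A mismatch means the local copy differs from the snapshot the archive verified, not necessarily that either copy is malicious.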
How to Cite
ConductAtlas Policy Archive
Entity: OpenAI | Document: Usage Policies | Record: CA-P-001978
Captured: 2026-03-10 03:28:59 UTC | SHA-256: d69a24617758e5b4…
URL: https://conductatlas.com/platform/openai/usage-policies/prohibition-on-cyberattack-facilitation/
Accessed: April 4, 2026
Classification
Severity: High
Categories
