OpenAI prohibits using its tools to produce content that encourages or celebrates real-world violence, terrorism, or attacks on infrastructure, and prohibits providing operational planning assistance to those conducting such attacks.
This analysis describes what OpenAI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.
This prohibition applies to both direct incitement and operational assistance, covering a range of content from propaganda to logistics support for violent acts, and applies regardless of whether the requester claims an educational, journalistic, or creative purpose.
Interpretive note: The document does not fully define the line between prohibited incitement or operational assistance and permissible educational, journalistic, or fictional discussion of violence, leaving interpretive ambiguity for edge cases.
Users engaged in fiction writing, journalism, security research, or academic study touching on terrorism or political violence may operate near the boundary of this provision; the policy does not exhaustively define what distinguishes prohibited incitement from permitted discussion, analysis, or fictional depiction of violence.
How other platforms handle this
You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.
Customer will not, and will not permit any other person (including any End User) to: ... (d) attempt to reverse engineer, decompile, or otherwise attempt to discover the source code or underlying components (e.g., algorithms, weights, or systems) of the Mistral AI Products, including using the Outpu...
You may not use the Services to attempt to circumvent, disable, or otherwise interfere with safety-related features of the Services, including features that prevent or restrict the generation of certain types of content.
Monitoring
OpenAI has changed this document before.
Receive same-day alerts, structured change summaries, and monitoring for up to 10 platforms.
"Generate content that incites, glorifies, or celebrates real-world violence or terrorist acts, or provide operational assistance to those planning attacks on people or infrastructure." — Excerpt from OpenAI's Usage Policies
(1) REGULATORY LANDSCAPE: This provision engages with US material support for terrorism statutes (18 U.S.C. § 2339A and § 2339B), the EU Directive on combating terrorism, UN Security Council Resolution 2178 on foreign terrorist fighters, and the EU AI Act's prohibited use categories regarding AI used to facilitate terrorism. The Department of Justice and international law enforcement agencies have jurisdiction over material support and incitement offenses.

(2) GOVERNANCE EXPOSURE: Medium to High for operators in news media, security research, gaming, and entertainment sectors that generate content involving violence or political extremism. The prohibition on 'operational assistance' to those planning attacks introduces a higher-severity threshold than general content restrictions, but the line between discussing attack methodologies in educational contexts and providing operational uplift requires case-by-case judgment.

(3) JURISDICTION FLAGS: EU operators face obligations under the EU AI Act and the Terrorism Directive. UK operators face obligations under the Terrorism Act 2006 regarding encouragement of terrorism. Operators serving users in jurisdictions with broad counter-terrorism statutes should assess whether AI-generated content discussing political violence could trigger legal exposure.

(4) CONTRACT AND VENDOR IMPLICATIONS: Media organizations, security research firms, and entertainment companies using OpenAI via API should document editorial review processes for AI-generated content involving violence or terrorism; assess whether their use cases constitute permissible journalistic, academic, or creative use; and ensure that client or user terms of service prohibit using AI-generated content for operational violence facilitation.

(5) COMPLIANCE CONSIDERATIONS: Operators should implement human editorial review for content involving violence facilitation themes; consult legal counsel on material support statute applicability to AI-assisted content; establish clear internal policies distinguishing permissible editorial use from prohibited operational assistance; and monitor regulatory developments regarding AI-generated extremist content.
Full compliance analysis
Regulatory citations, enforcement risk, and due diligence action items.
Free: track 1 platform + weekly digest. Watcher: 10 platforms + same-day alerts. No credit card required.
Professional Governance Intelligence
Need to monitor specific governance provisions?
Professional includes provision-level monitoring, governance timelines, regulatory mapping, and audit-ready analysis.
Built from archived source documents, structured governance mappings, and historical version tracking.
Is ConductAtlas affiliated with OpenAI? No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by OpenAI.