OpenAI · OpenAI Usage Policies

Weapons of Mass Destruction Prohibition

High severity · Medium confidence · Explicit document language · Rare · 1 of 325 platforms
Recent governance activity: OpenAI recorded 5 documented changes in the last 30 days.
Document Record

What it is

OpenAI prohibits using its tools to meaningfully assist anyone attempting to develop biological, chemical, nuclear, or radiological weapons capable of causing mass casualties, regardless of the stated purpose.

This analysis describes what OpenAI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This provision addresses one of the highest-risk potential misuses of generative AI, and its scope covers not just direct weapons synthesis but any assistance that provides 'serious uplift' — a term that implies a meaningful capability increase — to someone pursuing such weapons.

Interpretive note: The term 'serious uplift' is not defined with precision in the document, creating interpretive ambiguity about where the line falls between prohibited assistance and permissible educational or research discussion.

Consumer impact (what this means for users)

Users and operators may not use OpenAI products to provide meaningful technical assistance in developing weapons of mass destruction. The term 'serious uplift' indicates the prohibition covers substantive capability assistance, not merely discussion of these topics in educational or historical contexts, though the precise boundary of this distinction is not fully defined in the document.

How other platforms handle this

Runway Medium

You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.

Mistral AI Medium

Customer will not, and will not permit any other person (including any End User) to: ... (d) attempt to reverse engineer, decompile, or otherwise attempt to discover the source code or underlying components (e.g., algorithms, weights, or systems) of the Mistral AI Products, including using the Outpu...

Perplexity AI Medium

You may not use the Services to attempt to circumvent, disable, or otherwise interfere with safety-related features of the Services, including features that prevent or restrict the generation of certain types of content.


Monitoring

OpenAI has changed this document before.

Original Clause Language (Document Record)

"Provide serious uplift to those seeking to create biological, chemical, nuclear, or radiological weapons with the potential for mass casualties"

— Excerpt from OpenAI's OpenAI Usage Policies

ConductAtlas Analysis

Institutional analysis (Compliance & governance intelligence)

(1) REGULATORY LANDSCAPE: This provision engages with US export control law (Export Administration Regulations and International Traffic in Arms Regulations), the Biological Weapons Anti-Terrorism Act, the Chemical Weapons Convention Implementation Act, and equivalent international treaties and national statutes. The Department of Justice, the Department of Commerce Bureau of Industry and Security, and the State Department Directorate of Defense Trade Controls have relevant enforcement authority. The EU AI Act classifies AI systems posing unacceptable risk in national security contexts as prohibited.

(2) GOVERNANCE EXPOSURE: High. The 'serious uplift' standard requires interpretive judgment about what constitutes meaningful capability enhancement versus general scientific information, creating potential compliance ambiguity for operators in research, defense, academic, and dual-use technology sectors.

(3) JURISDICTION FLAGS: US export control laws apply extraterritorially and may affect non-US operators using OpenAI services to conduct dual-use research. EU operators should assess this provision under the EU AI Act's prohibited use categories. Academic and research institutions in particular should evaluate whether their use cases approach the 'serious uplift' threshold.

(4) CONTRACT AND VENDOR IMPLICATIONS: Defense contractors, academic research institutions, and life sciences companies deploying OpenAI via API should conduct specific legal review of whether their intended use cases could be characterized as providing 'serious uplift' under this provision, and document that review in their vendor risk assessments.

(5) COMPLIANCE CONSIDERATIONS: Operators in dual-use research sectors should establish internal review protocols for AI-assisted research outputs, consult with export control counsel regarding their specific use cases, and consider whether their terms of service with end users adequately address this restriction.


Applicable agencies

  • FTC
    The FTC has authority over unfair or deceptive practices related to AI service policies, including failure to enforce stated safety restrictions.

Applicable regulations

  • CFAA (United States Federal)
  • DMCA (United States Federal)
  • DSA (European Union)
  • Trump Executive Order on AI Policy Framework (US)

Provision details

Document information
Document
OpenAI Usage Policies
Entity
OpenAI
Document last updated
May 11, 2026
Tracking information
First tracked
May 11, 2026
Last verified
May 12, 2026
Record ID
CA-P-011723
Document ID
CA-D-00753
Evidence Provenance
Source URL
Wayback Machine
Content hash (SHA-256)
7bc76af79d3d7702e7ce284199b0b15a9dc7dd89f62958bd0823240c00eaab06
Analysis generated
May 11, 2026 12:43 UTC
Methodology
Evidence
✓ Snapshot stored   ✓ Hash verified
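
The recorded SHA-256 content hash allows anyone holding a copy of the archived snapshot to confirm it has not been altered. A minimal verification sketch (the file path and helper name are illustrative, not part of the ConductAtlas record):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the hash published in the document record.
RECORDED = "7bc76af79d3d7702e7ce284199b0b15a9dc7dd89f62958bd0823240c00eaab06"
# assert sha256_of("snapshot.html") == RECORDED
```

A match confirms the local snapshot is byte-for-byte identical to the version ConductAtlas captured; a mismatch means the file differs from the archived original.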
Citation Record
Entity: OpenAI
Document: OpenAI Usage Policies
Record ID: CA-P-011723
Captured: 2026-05-11 12:43:28 UTC
SHA-256: 7bc76af79d3d7702…
URL: https://conductatlas.com/platform/openai/openai-usage-policies/weapons-of-mass-destruction-prohibition/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity
High
Categories



Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does OpenAI's Weapons of Mass Destruction Prohibition clause do?

This provision addresses one of the highest-risk potential misuses of generative AI, and its scope covers not just direct weapons synthesis but any assistance that provides 'serious uplift' — a term that implies a meaningful capability increase — to someone pursuing such weapons.

How does this clause affect you?

Users and operators may not use OpenAI products to provide meaningful technical assistance in developing weapons of mass destruction. The term 'serious uplift' indicates the prohibition covers substantive capability assistance, not merely discussion of these topics in educational or historical contexts, though the precise boundary of this distinction is not fully defined in the document.

How many platforms have this type of clause?

ConductAtlas has identified this type of provision on 1 platform. See the full comparison.

Is ConductAtlas affiliated with OpenAI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by OpenAI.