OpenAI · Usage Policies

Prohibition on Weapons of Mass Destruction Assistance

High severity · Medium confidence · Inferred from context · Unique · 0 of 325 platforms
Recent governance activity: OpenAI recorded 7 documented changes in the last 30 days.
Document Record

What it is

The policy prohibits using OpenAI services to provide assistance in creating biological, chemical, nuclear, or radiological weapons capable of mass casualties.

This analysis describes what OpenAI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This restriction applies across all products and APIs and cannot be overridden by operator configuration, reflecting an absolute safety boundary that OpenAI's model alignment is also designed to enforce.

Interpretive note: Verbatim text could not be extracted from the binary PDF. The provision is inferred from document metadata and publicly available OpenAI Usage Policy language consistent with this document version.

Consumer impact (what this means for users)

Users who attempt to use ChatGPT or the API to obtain technical guidance on creating weapons capable of mass casualties violate this policy, face account termination, and may have their attempts flagged to relevant authorities.

How other platforms handle this

Runway · Medium severity

You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.

Mistral AI · Medium severity

Customer will not, and will not permit any other person (including any End User) to: ... (d) attempt to reverse engineer, decompile, or otherwise attempt to discover the source code or underlying components (e.g., algorithms, weights, or systems) of the Mistral AI Products, including using the Outpu...

Perplexity AI · Medium severity

You may not use the Services to attempt to circumvent, disable, or otherwise interfere with safety-related features of the Services, including features that prevent or restrict the generation of certain types of content.


Monitoring

OpenAI has changed this document before.



Institutional analysis (Compliance & governance intelligence)

1. Regulatory landscape: This provision engages export control frameworks including the Export Administration Regulations (EAR) administered by the Bureau of Industry and Security (BIS), the International Traffic in Arms Regulations (ITAR), and the Chemical Weapons Convention Implementation Act. Dual-use research of concern (DURC) frameworks administered by the NIH and biosafety regulators are also implicated for biological weapon assistance. The FTC Act is relevant to the extent that misrepresentation of safety capabilities would constitute an unfair or deceptive practice.

2. Governance exposure: High. Operators deploying OpenAI models in research, defense, or biotechnology contexts must implement layered content filters specifically targeting requests for weapons-of-mass-destruction-relevant technical information. Failure to do so could constitute a violation of export control law independent of this policy.

3. Jurisdiction flags: This obligation applies globally, with heightened exposure in the US (EAR, ITAR), the EU (dual-use goods regulation), and jurisdictions that are signatories to the Biological Weapons Convention and Chemical Weapons Convention. Research institutions and defense contractors face the highest exposure.

4. Contract and vendor implications: Procurement teams in defense, research, and biotech sectors should conduct specific due diligence on OpenAI's safety filter capabilities, including whether operator-level system prompts can inadvertently bypass these restrictions. Contracts should include representations about compliance with applicable export control laws.

5. Compliance considerations: Organizations deploying OpenAI in research contexts should conduct a dual-use research risk assessment. Compliance programs should include training on prohibited queries and monitoring for attempts to extract weapons-relevant information through prompt engineering.


Applicable agencies

  • FTC
    The FTC has jurisdiction over consumer protection and deceptive practices related to AI safety representations by technology companies.

Applicable regulations

  • CFAA — United States (Federal)
  • DMCA — United States (Federal)
  • DSA — European Union
  • Trump Executive Order on AI Policy Framework — US

Provision details

Document information
  • Document: Usage Policies
  • Entity: OpenAI
  • Document last updated: March 5, 2026

Tracking information
  • First tracked: March 10, 2026
  • Last verified: May 12, 2026
  • Record ID: CA-P-009454
  • Document ID: CA-D-00005
Evidence Provenance
  • Source URL: Wayback Machine
  • Content hash (SHA-256): d69a24617758e5b44e4be8eedeceb598a26dc4e280f2ab1469a45b64203e7403
  • Analysis generated: March 10, 2026 03:28 UTC
  • Evidence: Snapshot stored, hash verified
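The published content hash lets anyone independently confirm that an archived snapshot has not been altered. A minimal sketch of that check in Python, using only the standard library (the snapshot bytes and helper name here are illustrative, not part of ConductAtlas's actual tooling):

```python
import hashlib

def verify_content_hash(content: bytes, expected_hex: str) -> bool:
    """Return True if the SHA-256 digest of an archived snapshot
    matches the published content hash."""
    return hashlib.sha256(content).hexdigest() == expected_hex.lower()

# Hypothetical snapshot bytes; a real check would hash the archived
# document exactly as captured, byte for byte.
snapshot = b"example snapshot body"
published = hashlib.sha256(snapshot).hexdigest()
print(verify_content_hash(snapshot, published))  # True for a matching hash
```

Because SHA-256 is collision-resistant in practice, a matching digest is strong evidence the stored snapshot is the same document that was originally captured.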
Citation Record
Entity: OpenAI
Document: Usage Policies
Record ID: CA-P-009454
Captured: 2026-03-10 03:28:59 UTC
SHA-256: d69a24617758e5b4…
URL: https://conductatlas.com/platform/openai/usage-policies/prohibition-on-weapons-of-mass-destruction-assistance/
Accessed: May 15, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
  • Severity: High
Categories



Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does OpenAI's Prohibition on Weapons of Mass Destruction Assistance clause do?

The clause prohibits using OpenAI services to provide assistance in creating biological, chemical, nuclear, or radiological weapons capable of mass casualties. The restriction applies across all products and APIs, cannot be overridden by operator configuration, and reflects an absolute safety boundary that OpenAI's model alignment is also designed to enforce.

How does this clause affect you?

Users who attempt to use ChatGPT or the API to obtain technical guidance on creating weapons capable of mass casualties violate this policy, face account termination, and may have their attempts flagged to relevant authorities.

Is ConductAtlas affiliated with OpenAI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by OpenAI.