OpenAI evaluated whether GPT-4o could meaningfully help someone create chemical, biological, radiological, or nuclear (CBRN) weapons and rated the risk "medium": serious enough to track closely, but below the threshold that would have blocked the model's release under OpenAI's Preparedness Framework.
While this risk primarily affects society at large rather than individual users, it signals that GPT-4o has capabilities that could be misused for serious harm. Notably, the deployment gate here is OpenAI's internal threshold, not a regulatory one.
A "medium" CBRN risk rating under OpenAI's own framework means GPT-4o can provide meaningful assistance to individuals attempting to create weapons capable of mass casualties. This is a significant public safety disclosure.
Regulatory framework: This provision implicates US Executive Order 14110 on Safe, Secure, and Trustworthy AI (dual-use foundation model reporting requirements), Article 5 of the EU AI Act (prohibited AI practices that endanger public safety), and potentially the domestic statutes implementing the Biological Weapons Anti-Terrorism Act and the Chemical Weapons Convention. The Department of Homeland Security and the National Security Council have oversight interests, and the EU AI Office holds enforcement authority for prohibited practices under the EU AI Act.