Before releasing GPT-4o, OpenAI paid more than 100 outside experts to look for ways the model could be misused or cause harm, and this document summarizes what they found.
OpenAI's use of external red teaming reflects a genuine safety process. However, the system card also shows that testers found real vulnerabilities that were not fully resolved before public release, so users may still encounter harmful outputs that testing identified but mitigations did not eliminate.
External red teaming is a meaningful safety practice, but the fact that risks were found and some remained unmitigated at launch means users are, in effect, participating in ongoing safety discovery rather than using a fully validated product.
Regulatory framework: This provision is relevant to EU AI Act Article 9 (risk management system) and Article 10 (data and data governance requirements), the NIST AI RMF (Measure function, particularly MV-2.2 on red-teaming), and NIST SP 800-53 security assessment controls as applied to AI systems. The EU AI Office and NIST provide the primary standards frameworks; no single regulator directly mandates red teaming at this level.