OpenAI · GPT-4o System Card (PDF)

External Red Teaming Disclosure

Medium severity

What it is

Before releasing GPT-4o, OpenAI engaged more than 100 outside experts to probe the model for ways it could be misused or cause harm. This document summarizes what they found.

Consumer impact (what this means for users)

OpenAI's use of external red teaming reflects a genuine safety process. However, the system card also reveals that testers found real vulnerabilities that were not fully resolved before public release, so users may still encounter harmful outputs that testing identified but mitigations did not eliminate.

Cross-platform context

See how other platforms handle External Red Teaming Disclosure and similar clauses.


Why it matters (compliance & risk perspective)

External red teaming is a meaningful safety practice, but because risks were found and some remained unmitigated at launch, users are in effect participating in ongoing safety discovery rather than using a fully validated product.

Original clause language
Prior to the release of GPT-4o, we engaged 100+ external red teamers across many different domains to help discover and evaluate risks in the new model... Red teamers were asked to probe the model for behaviors that violate our policies or that could be harmful. This included testing for harmful content generation, dangerous or regulated information, privacy violations, and other areas of potential misuse.

Institutional analysis (Compliance & legal intelligence)

REGULATORY FRAMEWORK: This provision is relevant to EU AI Act Article 9 (risk management system) and Article 10 (data and governance requirements), the NIST AI RMF (Measure function, particularly MV-2.2 on red teaming), and NIST SP 800-53 security assessment controls for AI systems. The EU AI Office and NIST provide the primary standards frameworks; no single regulator directly mandates red teaming at this level.


Provision details

Document information
Document: GPT-4o System Card (PDF)
Entity: OpenAI
Document last updated: March 5, 2026

Tracking information
First tracked: March 10, 2026
Last verified: April 27, 2026
Record ID: CA-P-003147
Document ID: CA-D-00008
Evidence Provenance
Source URL: Wayback Machine
SHA-256: 7c23ef53467eea199596abe78511d57ffee1e94b50ef10ac0f7d81df278b5059
Verified: ✓ Snapshot stored · ✓ Change verified
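The SHA-256 fingerprint above lets anyone independently confirm that a downloaded copy of the PDF matches the archived snapshot. A minimal sketch of that check in Python (the file path and helper names here are illustrative, not part of any ConductAtlas tooling):

```python
import hashlib

# SHA-256 recorded in the Evidence Provenance block above.
EXPECTED_SHA256 = "7c23ef53467eea199596abe78511d57ffee1e94b50ef10ac0f7d81df278b5059"

def sha256_of_file(path: str) -> str:
    """Hash the file in chunks so large PDFs need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_provenance(path: str, expected: str = EXPECTED_SHA256) -> bool:
    # hexdigest() is lowercase hex; normalize the recorded value the same way.
    return sha256_of_file(path) == expected.lower()
```

If `matches_provenance("gpt-4o-system-card.pdf")` returns True, the local copy is byte-for-byte identical to the snapshot the archive hashed; any edit to the document, however small, changes the digest.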
How to Cite
ConductAtlas Policy Archive
Entity: OpenAI | Document: GPT-4o System Card (PDF) | Record: CA-P-003147
Captured: 2026-03-10 03:40:55 UTC | SHA-256: 7c23ef53467eea19…
URL: https://conductatlas.com/platform/openai/gpt-4o-system-card-pdf/external-red-teaming-disclosure/
Accessed: May 2, 2026
Classification
Severity: Medium
Categories:
