10 provisions total
7 High severity
3 Medium severity
0 Low severity
Summary

This is OpenAI's official safety report for GPT-4o, its multimodal AI model that can see, hear, and speak in real time. The report explains what risks OpenAI identified and what it did, or did not fully fix, before releasing the model to the public. The most important point for everyday users is that OpenAI acknowledges GPT-4o's voice features can mimic real people's voices and generate emotionally persuasive responses, and that safeguards against these risks were not fully complete at the time of release. If you use GPT-4o's voice or image features, be aware that the model may behave differently depending on which app or operator deploys it, and OpenAI's protections may not apply uniformly across all platforms.

Technical Summary

This document is the GPT-4o System Card published by OpenAI: a pre-deployment safety disclosure governing the release of the GPT-4o multimodal AI model. It operates under OpenAI's internal Preparedness Framework and voluntary AI safety commitments rather than a binding statutory legal basis. Its most significant obligations are OpenAI's commitments to conduct external red teaming, to apply frontier-risk evaluations across the CBRN (chemical, biological, radiological, nuclear), cybersecurity, and persuasion risk categories, and to implement model-level and policy-level mitigations before deployment. The most notable departure from industry standard is the explicit acknowledgment that GPT-4o's real-time audio and vision capabilities create novel risks around voice cloning, non-consensual intimate imagery generation, and emotional over-reliance on AI personas, risks that OpenAI concedes remained incompletely mitigated at launch. The document engages the EU AI Act (particularly high-risk system classification considerations), FTC consumer-protection authority over deceptive AI practices, and emerging US federal AI governance frameworks. Material compliance considerations include the absence of binding third-party audit obligations, incomplete mitigation disclosures for audio-modality risks, and operator-mediated deployment structures that diffuse accountability across the API supply chain.

Evidence Provenance
Captured March 10, 2026 03:33 UTC
Document ID CA-D-000008
Version ID CA-V-000071
Wayback Machine: archived versions available
SHA-256 13469e1f569bac73628d7be62bc69800973adef5b79096ccd439344d4f658502
✓ Snapshot stored ✓ Text extracted ✓ Change verified ✓ Cryptographically signed
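The provenance block above publishes a SHA-256 digest for the captured document. As a minimal sketch of how a reader could check a local copy against that digest (the file path is hypothetical; only the digest comes from the provenance block):

```python
import hashlib

# Digest copied from the Evidence Provenance block above.
EXPECTED_SHA256 = "13469e1f569bac73628d7be62bc69800973adef5b79096ccd439344d4f658502"

def sha256_of_file(path: str, chunk_size: int = 8192) -> str:
    """Stream a file in chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_capture(path: str, expected: str = EXPECTED_SHA256) -> bool:
    """True if the local file's digest matches the published capture digest."""
    return sha256_of_file(path) == expected
```

Note that this only confirms byte-for-byte integrity of a copy; it says nothing about the cryptographic signature mentioned in the provenance badges, which would require the signer's public key to verify.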


Applicable Regulations

EU AI Act — European Union
BIPA — Illinois, USA
CCPA/CPRA — California, USA
CFAA — United States, federal
CAN-SPAM — United States, federal
DMCA — United States, federal
DSA — European Union
GDPR — European Union
UK GDPR — United Kingdom