
External Red Teaming and Pre-Deployment Evaluation

Low severity

Why it matters

External red teaming is considered a best practice in AI safety, and its inclusion demonstrates a meaningful (if not independently verified) pre-deployment safety process, though the selection of red teamers and the scope of testing remain at OpenAI's discretion.

Consumer impact

GPT-4o's system card discloses that the model's expressive audio capabilities create risks of emotional dependency, sycophantic reinforcement of user beliefs, and potential manipulation — risks OpenAI acknowledges but has not fully resolved at launch. Users interacting with voice mode may receive outputs calibrated to sound emotionally resonant, which can subtly influence decision-making and foster over-reliance on the AI. You can reduce these risks by using text mode instead of voice mode and by independently verifying any important advice or information GPT-4o provides.

Provision details

Document information
Document: GPT-4o System Card (PDF)
Entity: OpenAI
Document last updated: March 5, 2026

Tracking information
First tracked: March 10, 2026
Last verified: March 31, 2026
Record ID: CA-P-000071
Document ID: CA-D-00008
Evidence Provenance
Source URL: Wayback Machine
SHA-256: 7c23ef53467eea199596abe78511d57ffee1e94b50ef10ac0f7d81df278b5059
Verified: ✓ Snapshot stored, ✓ Change verified
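
For readers who want to confirm that a saved copy of the archived PDF matches this record, the sketch below compares a local file's SHA-256 digest against the recorded hash. The local filename is a hypothetical placeholder; the hash constant is the value listed above.

    # Minimal sketch: check a locally saved copy of the archived PDF
    # against the SHA-256 recorded in this provenance entry.
    import hashlib
    from pathlib import Path

    RECORDED_SHA256 = "7c23ef53467eea199596abe78511d57ffee1e94b50ef10ac0f7d81df278b5059"

    def sha256_of(path: Path) -> str:
        """Hash the file in chunks so large PDFs need not fit in memory."""
        digest = hashlib.sha256()
        with path.open("rb") as fh:
            for chunk in iter(lambda: fh.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        pdf_path = Path("gpt-4o-system-card.pdf")  # hypothetical local filename
        computed = sha256_of(pdf_path)
        status = "match" if computed == RECORDED_SHA256 else "MISMATCH"
        print(f"computed: {computed}")
        print(f"recorded: {RECORDED_SHA256}")
        print(f"result:   {status}")
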
How to Cite
ConductAtlas Policy Archive
Entity: OpenAI | Document: GPT-4o System Card (PDF) | Record: CA-P-000071
Captured: 2026-03-10 03:40:55 UTC | SHA-256: 7c23ef53467eea19…
URL: https://conductatlas.com/platform/openai/gpt-4o-system-card-pdf/external-red-teaming-and-pre-deployment-evaluation/
Accessed: April 4, 2026
Classification
Severity: Low
Categories:
