OpenAI launched GPT-4o's voice features knowing that safety mitigations for the audio modality were not yet complete, and intentionally limited access while it continues developing those protections.
Users who interact with GPT-4o through voice interfaces may be exposed to risks — including voice impersonation and emotionally manipulative outputs — that OpenAI has not yet fully mitigated, and the level of protection depends on which platform or operator they use.
This is a rare public admission by a major AI company that a consumer-facing feature was released with known, unresolved safety gaps, meaning users of voice features have less protection than users of text features.
(1) REGULATORY FRAMEWORK: This provision implicates FTC Act Section 5 (unfair or deceptive practices, here deploying a product with known unmitigated harm vectors), EU AI Act Article 9 (risk management system obligations requiring iterative risk assessment throughout the lifecycle), and EU AI Act Article 13 (transparency obligations requiring disclosure of known limitations). The FTC has enforcement authority in the US; the European AI Office oversees EU AI Act compliance.