OpenAI's own safety testers found that GPT-4o's voice feature could be used to impersonate real people, which is why voice output is currently restricted to preset voices; these restrictions, however, may not apply in all deployment contexts.
Your voice and likeness could be at risk if GPT-4o is used by bad actors to generate convincing audio impersonations; currently OpenAI limits voice output to preset voices, but this restriction is not permanent and may not apply to all operator deployments.
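For readers checking the technical claim, here is a minimal sketch, assuming the current OpenAI Python SDK, of how the preset-voice restriction surfaces in practice: the text-to-speech endpoint exposes a `voice` parameter that accepts only named presets (such as `alloy`), with no parameter for supplying a reference recording to clone. The model name and output path below are illustrative, not taken from the source.

```python
# Minimal sketch (not from the source) of the preset-voice restriction
# in OpenAI's text-to-speech API. Model, voice, and path are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.audio.speech.create(
    model="tts-1",    # OpenAI TTS model
    voice="alloy",    # must be one of the preset voices; the API exposes
                      # no parameter for cloning an arbitrary voice
    input="This audio was generated by an AI system.",
)

# Write the returned audio bytes to disk (path is illustrative).
with open("speech.mp3", "wb") as f:
    f.write(response.read())
```

Note that this restriction lives in the API surface rather than the underlying model, which is why the clause warns it may not hold across all operator deployments.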
Voice cloning and impersonation capabilities in a widely deployed AI system create serious risks for fraud, non-consensual audio deepfakes, and reputational harm; OpenAI's red team confirmed these risks exist in GPT-4o.
Regulatory framework: this provision implicates FTC Act Section 5 (impersonation as an unfair or deceptive practice), the FTC's 2024 impersonation rule (16 CFR Part 461), state right-of-publicity laws (California Civil Code §3344, New York Civil Rights Law §§50-51), the EU AI Act Article 52 (transparency obligations for AI-generated audio content), and the DEEPFAKES Accountability Act (proposed but not enacted). The FTC has primary enforcement authority in the US.