You cannot use ElevenLabs to make a voice recording that sounds like a real person, such as a celebrity or public official, if the goal is to deceive or mislead people about who is speaking.
This analysis describes what ElevenLabs's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This provision directly addresses the most prominent misuse risk of AI voice synthesis: generating convincing audio impersonations of real people for fraud, disinformation, or reputational harm. The policy states that consent is required and that deceptive intent triggers the prohibition.
Interpretive note: The provision's application to parody, satire, or clearly labeled fictional content is not explicitly addressed, creating interpretive ambiguity at the margin.
The prohibition protects individuals, including public figures and private persons, from having their voice synthesized without consent for deceptive purposes; consumers who encounter AI-generated audio that impersonates a real person in a misleading way may report it to ElevenLabs under this policy.
"Users may not use ElevenLabs' platform to generate voice content that impersonates real individuals, including public figures, without their consent. This prohibition applies to content intended to deceive, defraud, or mislead audiences about the origin or authenticity of the voice." — Excerpt from the ElevenLabs Safety Policy
Regulatory landscape
This provision engages the FTC Act's prohibition on unfair or deceptive practices, which the FTC has applied to AI-generated impersonation in commercial contexts. The EU AI Act's transparency requirements for AI-generated synthetic media are also relevant, as is the EU's proposed AI Liability Directive. State-level deepfake statutes in California (AB 2839 for electoral deepfakes; AB 602 for non-consensual intimate deepfakes), Texas, Virginia, and New York create additional exposure. The FTC and state attorneys general are the primary enforcement authorities in the US context.
Governance exposure
High. The impersonation prohibition is broad but relies on the qualifier of deceptive or misleading intent, which may create interpretive ambiguity in creative, parody, or satire contexts. Enterprise customers producing voice content featuring real individuals (e.g., in marketing, journalism, or entertainment) should assess whether their use cases fall within or outside this prohibition.
Jurisdiction flags
California's AB 2839 imposes specific restrictions on AI-generated audio of candidates in the 60 days before an election. The EU AI Act's limited-risk tier requires providers of AI systems generating synthetic audio to implement disclosure mechanisms. Political advertising use cases involving voice synthesis carry elevated regulatory exposure across multiple US states.
Contract and vendor implications
Enterprise agreements that include licensed use of public figure voices or branded spokesperson voice synthesis should be reviewed against this prohibition and against applicable right-of-publicity statutes. Indemnification clauses in enterprise contracts should address liability for impersonation-related claims.
Compliance considerations
Compliance teams should establish content review workflows for any production use of synthesized voices of identifiable real individuals, including documented consent records.
Legal teams should monitor state deepfake legislation, which is evolving rapidly across the US, to ensure ongoing compliance.
Is ConductAtlas affiliated with ElevenLabs? No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by ElevenLabs.