You cannot use ElevenLabs to create a voice that impersonates a real person in a way that could deceive people or harm that person's reputation.
This analysis describes what ElevenLabs's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
Impersonation using AI voice technology creates direct fraud, defamation, and reputational harm risks; this provision establishes ElevenLabs' policy position and sets user liability for such uses.
Interpretive note: The policy does not address satire or parody carve-outs, leaving the boundary between prohibited impersonation and permissible expressive use ambiguous.
Users who create synthetic audio that impersonates identifiable real individuals in a deceptive or harmful way are in violation of this provision and face account termination, as well as potential civil liability for defamation, fraud, or identity misrepresentation.
Cross-platform context
See how other platforms handle Impersonation of Real Individuals and similar clauses.
Compare across platforms →
Monitoring
ElevenLabs has changed this document before.
Receive same-day alerts, structured change summaries, and monitoring for up to 10 platforms.
"You may not use the Services to impersonate any real person, including public figures, in a manner that is likely to deceive others or cause harm to that individual's reputation or safety."
— Excerpt from the ElevenLabs Usage Policy
(1) REGULATORY LANDSCAPE: This provision engages common law defamation and right of publicity doctrines, state identity fraud and impersonation statutes, the FTC Act's prohibition on deceptive practices, and the EU AI Act's transparency obligations for AI-generated content that could be mistaken for authentic speech by real individuals. Platform liability frameworks under Section 230 (US) and the DSA (EU) are also relevant to how ElevenLabs' own exposure is framed.

(2) GOVERNANCE EXPOSURE: Medium to High. The prohibition on harmful impersonation is broadly consistent with legal requirements, but the policy's use of 'likely to deceive' and 'cause harm' introduces an interpretive standard that may be contested in enforcement scenarios, particularly where satirical or parodic content is involved.

(3) JURISDICTION FLAGS: Right of publicity claims are strongest in California and New York. Defamation exposure varies significantly by jurisdiction. EU users benefit from stronger personal data and dignity protections. The policy does not carve out satire or parody, which may create tension with free speech principles in US jurisdictions.

(4) CONTRACT AND VENDOR IMPLICATIONS: B2B customers should assess whether their products permit user-generated content involving impersonation and implement moderation controls accordingly. The AUP places liability on the end user, but platform-level due diligence may be expected by regulators in some jurisdictions.

(5) COMPLIANCE CONSIDERATIONS: Legal teams should evaluate whether the policy's lack of a satire or parody carve-out creates operational friction for legitimate creative use cases, and may wish to seek clarification from ElevenLabs on permissible expressive uses of voice synthesis involving public figures.
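For B2B customers weighing the moderation controls mentioned above, one minimal pattern is a pre-generation gate that blocks voice-cloning requests targeting an identifiable real person unless consent is documented and the output is disclosed as synthetic. This is an illustrative sketch only; the field names, decision tiers, and thresholds are our assumptions, not part of the ElevenLabs policy or any ElevenLabs API.

```python
from dataclasses import dataclass

# Hypothetical moderation gate. All names and rules below are illustrative
# assumptions, not drawn from the ElevenLabs Usage Policy or API.

@dataclass
class VoiceRequest:
    target_name: str          # real person the voice is claimed to match ("" if none)
    consent_on_file: bool     # platform holds documented consent from that person
    labeled_synthetic: bool   # output will be disclosed as AI-generated

def moderation_decision(req: VoiceRequest) -> str:
    """Return 'allow', 'review', or 'block' for a voice-cloning request."""
    if not req.target_name:
        return "allow"        # no identifiable individual is being imitated
    if req.consent_on_file and req.labeled_synthetic:
        return "allow"        # consented and disclosed
    if req.consent_on_file or req.labeled_synthetic:
        return "review"       # partial safeguards: route to human review
    return "block"            # undisclosed, non-consensual clone of a real person

# Example: an unlabeled clone of a named person without consent is blocked.
print(moderation_decision(VoiceRequest("Jane Doe", False, False)))  # block
```

A gate like this does not settle the satire/parody ambiguity the policy leaves open; it simply ensures the highest-risk case (deceptive, non-consensual impersonation) never ships without review.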
Full compliance analysis
Regulatory citations, enforcement risk, and due diligence action items.
Free: track 1 platform + weekly digest. Watcher: 10 platforms + same-day alerts. No credit card required.
Professional Governance Intelligence
Need to monitor specific governance provisions?
Professional includes provision-level monitoring, governance timelines, regulatory mapping, and audit-ready analysis.
Built from archived source documents, structured governance mappings, and historical version tracking.
Is ConductAtlas affiliated with ElevenLabs? No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by ElevenLabs.