ElevenLabs · ElevenLabs Safety Policy

Prohibition on Impersonation of Real Individuals

High severity · Medium confidence · Explicit document language · Unique · 0 of 325 platforms
Document Record

What it is

You cannot use ElevenLabs to make a voice recording that sounds like a real person, such as a celebrity or public official, if the goal is to deceive or mislead people about who is speaking.

This analysis describes what ElevenLabs's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This provision directly addresses the most prominent misuse risk of AI voice synthesis: generating convincing audio impersonations of real people for fraud, disinformation, or reputational harm. The policy states that consent is required and that deceptive intent triggers the prohibition.

Interpretive note: The provision's application to parody, satire, or clearly labeled fictional content is not explicitly addressed, creating interpretive ambiguity at the margin.

Consumer impact (what this means for users)

The prohibition protects individuals, including public figures and private persons, from having their voice synthesized without consent for deceptive purposes; consumers who encounter AI-generated audio that impersonates a real person in a misleading way may report it to ElevenLabs under this policy.

Cross-platform context

See how other platforms handle "Prohibition on Impersonation of Real Individuals" and similar clauses.

Monitoring

ElevenLabs has changed this document before.

Original Clause Language

"Users may not use ElevenLabs' platform to generate voice content that impersonates real individuals, including public figures, without their consent. This prohibition applies to content intended to deceive, defraud, or mislead audiences about the origin or authenticity of the voice."

— Excerpt from the ElevenLabs Safety Policy

Institutional analysis (Compliance & governance intelligence)

REGULATORY LANDSCAPE: This provision engages the FTC Act's prohibition on unfair or deceptive practices, which the FTC has applied to AI-generated impersonation in commercial contexts. The EU AI Act's transparency requirements for AI-generated synthetic media are also relevant, as is the EU's proposed AI Liability Directive. State-level deepfake statutes in California (AB 2839 for electoral deepfakes; AB 602 for non-consensual intimate deepfakes), Texas, Virginia, and New York create additional exposure. The FTC and state attorneys general are the primary enforcement authorities in the US context.

GOVERNANCE EXPOSURE: High. The impersonation prohibition is broad but relies on the qualifier of deceptive or misleading intent, which may create interpretive ambiguity in creative, parody, or satire contexts. Enterprise customers producing voice content featuring real individuals (e.g., in marketing, journalism, or entertainment) should assess whether their use cases fall within or outside this prohibition.

JURISDICTION FLAGS: California's AB 2839 imposes specific restrictions on AI-generated audio of candidates in the 60 days before an election. The EU AI Act's limited-risk tier requires providers of AI systems generating synthetic audio to implement disclosure mechanisms. Political advertising use cases involving voice synthesis carry elevated regulatory exposure across multiple US states.

CONTRACT AND VENDOR IMPLICATIONS: Enterprise agreements that include licensed use of public figure voices or branded spokesperson voice synthesis should be reviewed against this prohibition and against applicable right-of-publicity statutes. Indemnification clauses in enterprise contracts should address liability for impersonation-related claims.

COMPLIANCE CONSIDERATIONS: Compliance teams should establish content review workflows for any production use of synthesized voices of identifiable real individuals, including documented consent records. Legal teams should monitor state deepfake legislation, which is evolving rapidly across the US, to ensure ongoing compliance.


Applicable agencies

  • FTC
    The FTC has authority over AI-generated impersonation used in deceptive or fraudulent commercial contexts under the FTC Act
    File a complaint →

Provision details

Document information
Document
ElevenLabs Safety Policy
Entity
ElevenLabs
Document last updated
May 12, 2026
Tracking information
First tracked
May 12, 2026
Last verified
May 12, 2026
Record ID
CA-P-012011
Document ID
CA-D-00833
Evidence Provenance
Source URL
Wayback Machine
Content hash (SHA-256)
b0b41cc06f252ab010e962f89a076fb511fcaecb58e9679d339728b7264dae47
Analysis generated
May 12, 2026 17:04 UTC
Methodology
Evidence
✓ Snapshot stored   ✓ Hash verified
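The evidence record above pairs the stored snapshot with a SHA-256 content hash, so anyone holding a copy of the snapshot can confirm it has not been altered since capture. A minimal sketch of such a check in Python follows; the file path and digest are illustrative placeholders, not ConductAtlas's actual pipeline:

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading in chunks
    so that large snapshots do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_snapshot(path: str, expected_digest: str) -> bool:
    """Return True if the stored snapshot still matches the recorded digest."""
    return sha256_of_file(path) == expected_digest.lower()
```

A matching digest shows only that the bytes are unchanged since the hash was recorded; pairing it with an independent archive (such as the Wayback Machine reference above) is what anchors the capture to a point in time.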
Citation Record
Entity: ElevenLabs
Document: ElevenLabs Safety Policy
Record ID: CA-P-012011
Captured: 2026-05-12 17:04:27 UTC
SHA-256: b0b41cc06f252ab0…
URL: https://conductatlas.com/platform/elevenlabs/elevenlabs-safety-policy/prohibition-on-impersonation-of-real-individuals/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity
High

Frequently Asked Questions
Is ConductAtlas affiliated with ElevenLabs?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by ElevenLabs.