Provision severity: 9 total (4 high, 3 medium, 2 low)
Summary

This document sets out ElevenLabs' rules for how its AI voice cloning and text-to-speech services may and may not be used. Its most significant restriction prohibits cloning or synthesizing the voice of any real person without that person's explicit consent; it also bans generating audio designed to spread disinformation, harass individuals, or create non-consensual intimate content. Users generating voices with ElevenLabs should be aware that the policy forbids using synthetic audio to impersonate others or deceive audiences, and that violations can result in account suspension or termination.

Technical / Legal Breakdown

This Acceptable Use Policy (AUP) governs permissible and prohibited uses of ElevenLabs' AI-powered voice synthesis and audio generation services, operating in conjunction with ElevenLabs' Terms of Service.

The policy establishes that users are prohibited from generating content that impersonates real individuals without consent, produces non-consensual intimate audio, facilitates fraud or deception, spreads disinformation, or violates third-party intellectual property rights, and the terms authorize ElevenLabs to suspend or terminate accounts for violations. Notably, the policy includes explicit prohibitions on voice cloning of real individuals without verifiable consent and on generating synthetic media designed to deceive audiences about its AI-generated nature, provisions that are operationally distinct given the specific capabilities of the platform and the emerging regulatory environment around synthetic media.

The policy engages GDPR and relevant EU AI Act provisions regarding high-risk AI systems and synthetic media disclosure obligations, as well as FTC guidance on deceptive practices and emerging state-level deepfake legislation in the United States; applicability of specific regulatory frameworks depends on user jurisdiction and the nature of the generated content. Material compliance considerations include the policy's treatment of consent verification for voice cloning, the scope of prohibited political content generation, and the adequacy of enforcement mechanisms relative to regulatory expectations under the EU AI Act and applicable state deepfake laws.


Monitoring

ElevenLabs has updated this document before.




Mapped Governance Frameworks

California AB 2013 AI Training Data Transparency (US-CA)
ePrivacy Directive (European Union)
Archival Provenance: Source & Archival Record
Last Captured: May 11, 2026 11:17 UTC
Capture Method: Automated scheduled archival capture
Document ID: CA-D-000779
Version ID: CA-V-002400
SHA-256: f7edcdd3410eeee4e3e63cc926e64439b2c2f8bbecd3171b03e1b56851537850
✓ Snapshot stored ✓ Text extracted ✓ Change verified ✓ Hash verified
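The "Hash verified" step above amounts to recomputing the SHA-256 digest of the stored snapshot and comparing it to the digest recorded at capture time. A minimal sketch of that check in Python, assuming the snapshot bytes are available locally (the sample content and function name are illustrative, not the actual CA-V-002400 capture):

```python
import hashlib

def verify_snapshot(snapshot_bytes: bytes, recorded_hash: str) -> bool:
    """Recompute the SHA-256 digest of a captured snapshot and
    compare it to the hex digest recorded at capture time."""
    return hashlib.sha256(snapshot_bytes).hexdigest() == recorded_hash.lower()

# Illustrative usage: digest recorded at "capture", then re-verified later.
snapshot = b"...captured policy text..."
recorded = hashlib.sha256(snapshot).hexdigest()
print(verify_snapshot(snapshot, recorded))            # True when untampered
print(verify_snapshot(snapshot + b"x", recorded))     # False if bytes changed
```

Because SHA-256 is collision-resistant, a matching digest gives strong assurance that the archived text is byte-for-byte identical to what was captured.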
