This page describes what the document states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability may vary by jurisdiction.

Methodology
This is ElevenLabs' rules document for how you can and cannot use their AI voice cloning and text-to-speech services. The most significant restriction is that the policy prohibits cloning or synthesizing the voice of any real person without that person's explicit consent, and also bans generating audio designed to spread disinformation, harass individuals, or create non-consensual intimate content. If you use ElevenLabs to generate voices, you should be aware that the policy prohibits using synthetic audio to impersonate others or deceive audiences, and violations can result in account suspension or termination.
This Acceptable Use Policy (AUP) governs permissible and prohibited uses of ElevenLabs' AI-powered voice synthesis and audio generation services, operating in conjunction with ElevenLabs' Terms of Service. The policy prohibits users from generating content that impersonates real individuals without consent, produces non-consensual intimate audio, facilitates fraud or deception, spreads disinformation, or violates third-party intellectual property rights; the terms authorize ElevenLabs to suspend or terminate accounts for violations.

Notably, the policy includes explicit prohibitions on cloning the voice of a real individual without verifiable consent and on generating synthetic media designed to deceive audiences about its AI-generated nature. These provisions are operationally distinct given the specific capabilities of the platform and the emerging regulatory environment around synthetic media.

The policy engages GDPR and relevant EU AI Act provisions on high-risk AI systems and synthetic media disclosure obligations, as well as FTC guidance on deceptive practices and emerging state-level deepfake legislation in the United States; which regulatory frameworks apply depends on user jurisdiction and the nature of the generated content. Material compliance considerations include the policy's treatment of consent verification for voice cloning, the scope of prohibited political content generation, and the adequacy of enforcement mechanisms relative to regulatory expectations under the EU AI Act and applicable state deepfake laws.
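The two central prohibitions described above — no voice cloning without verifiable consent, and no concealment of a clip's AI-generated nature — can be pictured as a pre-flight gate in an integrator's own pipeline. The sketch below is purely illustrative: `CloneRequest`, `is_permitted`, and `consent_record_id` are hypothetical names invented for this example and are not part of any ElevenLabs API.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class CloneRequest:
    """Hypothetical request an integrator might validate before calling
    a voice-cloning service. All field names are illustrative."""
    voice_owner: str                      # the real person whose voice is cloned
    consent_record_id: Optional[str]      # verifiable consent artifact, or None
    disclosed_as_synthetic: bool          # output will be labeled AI-generated

def is_permitted(req: CloneRequest) -> Tuple[bool, str]:
    """Mirror the AUP's two key rules as a pre-flight check:
    explicit consent for real-person voices, and no deceptive synthetic media."""
    if not req.consent_record_id:
        return False, "missing verifiable consent from the voice owner"
    if not req.disclosed_as_synthetic:
        return False, "output must not conceal its AI-generated nature"
    return True, "ok"
```

For example, `is_permitted(CloneRequest("Jane Doe", None, True))` fails the consent check, while a request carrying both a consent record and a disclosure flag passes. Real compliance obligations under the AUP and applicable law are broader than this two-rule gate.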
ElevenLabs has updated this document before.