This page describes what the document states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability may vary by jurisdiction.

Methodology
This is ElevenLabs' public statement of rules for how its AI voice-generation technology may and may not be used. Its most significant provisions are that cloning another person's voice requires that person's explicit consent, and that ElevenLabs prohibits generating voice content to impersonate real people, spread disinformation, or facilitate fraud. If you use ElevenLabs to clone a voice, you should obtain and retain documented consent from the person whose voice you are cloning before creating or distributing that content.
This document is ElevenLabs' Responsible AI policy, a voluntary governance framework governing the acceptable use of its AI-powered voice synthesis and text-to-speech platform. It does not carry explicit statutory legal force, but it serves as a public commitment and is incorporated by reference into ElevenLabs' Terms of Service.

The policy states that ElevenLabs prohibits use of its platform to generate voice content that impersonates real individuals without consent, produces non-consensual intimate content, spreads disinformation, facilitates fraud, or targets minors with harmful material. The terms further establish that ElevenLabs reserves the right to remove content, suspend accounts, and report violations to law enforcement. The policy includes a Voice Cloning consent requirement obligating users to obtain explicit permission from any individual whose voice is cloned, and ElevenLabs operates a Content Safety team with automated and human review capabilities, which is operationally distinct from platforms that rely solely on automated moderation without a stated human review layer.

The policy engages the EU AI Act (which classifies certain voice synthesis applications as high-risk or limited-risk AI systems with transparency obligations), the FTC Act (which prohibits unfair or deceptive practices, including AI-generated impersonation used in fraud), and applicable state laws, including the California Personality Rights Act and emerging state statutes on AI-generated deepfakes. Enforcement applicability varies by jurisdiction and by how ElevenLabs classifies its services under each framework.

Material compliance considerations include whether ElevenLabs' consent-for-voice-cloning mechanisms satisfy GDPR Article 9 requirements for biometric data processing in EU/EEA contexts, and whether its content moderation disclosures satisfy EU AI Act transparency obligations for AI-generated audio.
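The consent-for-cloning obligation described above is, in practice, a record-keeping discipline: obtain explicit permission, retain evidence of it, and gate cloning on a retained record. The sketch below illustrates that workflow only; the class names, fields, and registry are hypothetical and are not part of ElevenLabs' policy, API, or any statutory requirement.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch of a consent registry; none of these names come from
# ElevenLabs. It only models "obtain and retain documented consent before
# creating or distributing cloned-voice content".

@dataclass(frozen=True)
class ConsentRecord:
    subject_name: str     # person whose voice will be cloned
    granted_at: datetime  # when explicit consent was given
    scope: str            # permitted use, e.g. "audiobook narration"
    evidence_uri: str     # pointer to the signed release or recording

class ConsentRegistry:
    def __init__(self) -> None:
        self._records: dict[str, ConsentRecord] = {}

    def register(self, record: ConsentRecord) -> None:
        """Retain the documented consent so it can be produced later."""
        self._records[record.subject_name] = record

    def may_clone(self, subject_name: str) -> bool:
        """Gate: proceed with cloning only when a retained record exists."""
        return subject_name in self._records

registry = ConsentRegistry()
registry.register(ConsentRecord(
    subject_name="Jane Doe",
    granted_at=datetime.now(timezone.utc),
    scope="podcast intro narration",
    evidence_uri="file:///releases/jane-doe-release.pdf",
))
print(registry.may_clone("Jane Doe"))   # True: consent on file
print(registry.may_clone("John Roe"))   # False: no retained consent
```

The design choice worth noting is that the gate checks for a *retained* record, not a transient yes/no flag: keeping the evidence URI and scope alongside the grant is what makes the consent auditable later.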
Monitoring
ElevenLabs has updated this document before.