You cannot use ElevenLabs to create fake audio recordings that spread false information, such as fabricating a statement that a real person never made.
This analysis describes what ElevenLabs's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
The policy specifically names disinformation as a prohibited use, which is relevant to elections, journalism, and public discourse contexts where synthetic audio could be used to fabricate statements by real individuals.
Interpretive note: The boundary between prohibited disinformation and permissible satire or clearly labeled fictional content is not explicitly defined in the policy, which may create interpretive ambiguity in edge cases.
This prohibition covers the creation of fabricated audio recordings presented as genuine statements by real people; individuals who encounter content they believe was created using ElevenLabs in violation of this provision may submit a report to ElevenLabs.
"ElevenLabs prohibits the use of its platform to create voice content designed to spread disinformation, including false statements of fact presented as genuine audio recordings of real individuals or fabricated news content." — Excerpt from the ElevenLabs Safety Policy
REGULATORY LANDSCAPE: AI-generated disinformation engages the FTC Act's prohibitions on deceptive practices, the EU AI Act's labeling requirements for AI-generated content, and emerging electoral-integrity statutes in multiple US states. The EU's Digital Services Act (DSA) imposes obligations on platforms to address the spread of disinformation at scale, which may apply to ElevenLabs depending on its user volume and service classification in the EU. The FTC and the European Commission (as DSA regulator) are the relevant enforcement authorities.

GOVERNANCE EXPOSURE: Medium. The prohibition is stated broadly, but enforcement depends on ElevenLabs's content detection capabilities, which the policy describes as involving both automated and human review. The operational gap between a policy prohibition and reliable detection of disinformation content at scale is a standard challenge for AI platform providers.

JURISDICTION FLAGS: Several US states have enacted or proposed statutes specifically targeting AI-generated disinformation in electoral contexts, including California, Minnesota, and Texas. EU users are subject to the AI Act's transparency requirements for synthetic media. Heightened exposure exists for any use of ElevenLabs in political advertising or news media production.

CONTRACT AND VENDOR IMPLICATIONS: Media companies, political campaigns, and PR firms using ElevenLabs should explicitly address disinformation risk in their vendor governance frameworks and ensure content review processes are in place before any AI-generated audio is published.

COMPLIANCE CONSIDERATIONS: Organizations producing AI-generated audio for public distribution should implement a disclosure and labeling workflow consistent with the EU AI Act's requirements for synthetic media and applicable state disclosure laws. Internal acceptable use policies should reference ElevenLabs's disinformation prohibition as a binding contractual constraint.
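The disclosure-and-labeling workflow described above can be sketched in code. The following is a minimal illustrative sketch, not a compliance tool: every name here (the `SyntheticMediaLabel` record, the `build_label` helper, the disclosure wording) is a hypothetical assumption of ours, not something defined by ElevenLabs, the EU AI Act, or any state statute. It shows one way an organization might record, per audio asset, that the content is AI-generated and that a human reviewer signed off before publication.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class SyntheticMediaLabel:
    """Hypothetical disclosure record attached to one AI-generated audio asset."""
    asset_id: str          # internal identifier for the audio file
    generator: str         # tool used to produce the audio, e.g. "ElevenLabs"
    is_synthetic: bool     # True for AI-generated or AI-modified audio
    disclosure_text: str   # label to be published alongside the audio
    reviewed_by: str       # human reviewer who approved publication
    reviewed_at: str       # ISO 8601 timestamp of the review

def build_label(asset_id: str, generator: str, reviewer: str) -> SyntheticMediaLabel:
    """Create a disclosure record; a real workflow would also persist and audit it."""
    return SyntheticMediaLabel(
        asset_id=asset_id,
        generator=generator,
        is_synthetic=True,
        disclosure_text="This audio was generated or modified using AI.",
        reviewed_by=reviewer,
        reviewed_at=datetime.now(timezone.utc).isoformat(),
    )

# Example: label one asset and serialize the record for an audit trail.
label = build_label("ep-104-intro", "ElevenLabs", "j.doe")
print(json.dumps(asdict(label), indent=2))
```

In practice such a record would be stored alongside the published asset and surfaced wherever the audio is distributed; the exact label text and retention requirements depend on the applicable jurisdiction.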
ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by ElevenLabs.