You cannot use ElevenLabs to create audio designed to trick people into thinking a real person said something they did not, spread false information, or interfere with elections.
This analysis describes what ElevenLabs's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology for details.
This provision addresses one of the most socially significant risks of AI voice technology, specifically the generation of synthetic audio designed to manipulate public opinion or deceive audiences, and it has direct implications under emerging synthetic media election laws.
Interpretive note: The prohibition on content 'intended to deceive' is intent-based, and the policy does not specify the practical standard for determining intent or what distinguishes harmful deception from satire or parody.
Users who generate AI audio intended to deceive audiences about speaker identity or spread false information, including political disinformation, are in violation of this provision and face account suspension or termination.
How other platforms handle this
Certain use cases, such as violence, hate speech, fraud, and privacy violations, are strictly prohibited. Developers must outline their use case and obtain approval to access the Cohere API, and are expected to understand the models' capabilities and limitations.
Don't claim to be human when directly and sincerely asked, use AI to deceive people about its fundamental nature, or impersonate real people or organizations in misleading ways.
Fraud and Deception. Attempting to defraud or misrepresent yourself or your services to others, including impersonating individuals or entities. Engaging in phishing, pharming, or other deceptive activities.
Monitoring
ElevenLabs has changed this document before.
Receive same-day alerts, structured change summaries, and monitoring for up to 10 platforms.
"You may not use the Services to generate audio content intended to deceive listeners about the identity of the speaker, spread false information, interfere with elections or democratic processes, or otherwise mislead audiences in ways that could cause harm.— Excerpt from ElevenLabs's ElevenLabs Usage Policy
(1) REGULATORY LANDSCAPE: This provision implicates the EU AI Act's prohibition on AI systems that deploy subliminal techniques or exploit vulnerabilities to distort behavior; FEC regulations and state election laws that increasingly address synthetic media in political advertising; the FTC Act's prohibition on deceptive commercial practices; and state-level political deepfake laws in California (AB 730, AB 2839), Texas, Michigan, and Minnesota. The EU's Digital Services Act also imposes platform obligations around disinformation and systemic risk.
(2) GOVERNANCE EXPOSURE: High. The prohibition on election interference content in particular sits within a rapidly evolving regulatory landscape. The policy's language is broad, but its practical enforcement depends on ElevenLabs' ability to detect intent-based violations, which is operationally challenging and may not satisfy regulators seeking affirmative platform safeguards.
(3) JURISDICTION FLAGS: Heightened exposure in the EU under the AI Act and DSA, in California under AB 2839 (requiring disclosure of AI-generated political content), and in any jurisdiction where synthetic media election laws apply. Federal election law exposure exists where content intersects with federal candidates or elections.
(4) CONTRACT AND VENDOR IMPLICATIONS: Enterprise and API customers building media production, news, or political communication tools on ElevenLabs should implement editorial controls and disclosure mechanisms for AI-generated audio. The policy places responsibility for intent-based violations on users, but downstream platform liability may arise under applicable intermediary liability and election law frameworks.
(5) COMPLIANCE CONSIDERATIONS: Compliance teams should assess whether AI-generated audio outputs from ElevenLabs integrations are subject to mandatory disclosure requirements under applicable state or EU law, and implement disclosure workflows accordingly.
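To make the disclosure-workflow point concrete, here is a minimal sketch of how an integration team might flag synthetic political audio for jurisdiction-specific labeling. Everything here is illustrative: the jurisdiction codes, the `AudioAsset` fields, and the disclosure wording are hypothetical placeholders, and the actual list of jurisdictions with disclosure mandates must come from counsel, not from this sketch.

```python
from dataclasses import dataclass, field

# Hypothetical set of jurisdictions assumed to require disclosure of
# AI-generated political audio; verify against current law before use.
DISCLOSURE_JURISDICTIONS = {"EU", "US-CA", "US-TX", "US-MI", "US-MN"}


@dataclass
class AudioAsset:
    """Illustrative record for a generated audio file in a media pipeline."""
    asset_id: str
    is_synthetic: bool
    is_political: bool
    target_jurisdictions: set
    disclosures: list = field(default_factory=list)


def apply_disclosures(asset: AudioAsset) -> AudioAsset:
    """Attach a disclosure label for each targeted jurisdiction that
    (under this sketch's assumptions) mandates labeling of synthetic
    political audio. Non-political or non-synthetic assets pass through."""
    if asset.is_synthetic and asset.is_political:
        flagged = asset.target_jurisdictions & DISCLOSURE_JURISDICTIONS
        for region in sorted(flagged):
            asset.disclosures.append(
                f"[{region}] This audio was generated with AI."
            )
    return asset


ad = AudioAsset(
    asset_id="spot-001",
    is_synthetic=True,
    is_political=True,
    target_jurisdictions={"US-CA", "US-NY"},
)
apply_disclosures(ad)
print(ad.disclosures)  # only US-CA is in the illustrative disclosure set
```

A real workflow would likely hang this check on the publishing step rather than generation, so that the disclosure travels with the distributed asset; the point of the sketch is only that jurisdiction targeting and content classification need to be captured as structured metadata before any labeling rule can be applied.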
Political advertising use cases warrant specific legal review given the pace of legislative activity in this area.
Full compliance analysis
Regulatory citations, enforcement risk, and due diligence action items.
Free: track 1 platform + weekly digest. Watcher: 10 platforms + same-day alerts. No credit card required.
Professional Governance Intelligence
Need to monitor specific governance provisions?
Professional includes provision-level monitoring, governance timelines, regulatory mapping, and audit-ready analysis.
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by ElevenLabs.