ElevenLabs · ElevenLabs Usage Policy

Disinformation and Deceptive Synthetic Media Prohibition

High severity · Medium confidence · Explicit document language · Unique · 0 of 325 platforms
Document Record

What it is

You cannot use ElevenLabs to create audio designed to trick people into thinking a real person said something they did not, spread false information, or interfere with elections.

This analysis describes what ElevenLabs's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This provision addresses one of the most socially significant risks of AI voice technology, specifically the generation of synthetic audio designed to manipulate public opinion or deceive audiences, and it has direct implications under emerging synthetic media election laws.

Interpretive note: The prohibition on content 'intended to deceive' is intent-based, and the practical standard for determining intent, as well as what constitutes harmful deception versus satire or parody, is not specified in the policy.

Consumer impact (what this means for users)

Users who generate AI audio intended to deceive audiences about speaker identity or spread false information, including political disinformation, are in violation of this provision and face account suspension or termination.

How other platforms handle this

Cohere Medium

Certain use cases, such as violence, hate speech, fraud, and privacy violations, are strictly prohibited. Developers must describe their use case and obtain approval to access the Cohere API, and are expected to understand the models and their limitations.

OpenAI Medium

Don't claim to be human when directly and sincerely asked, use AI to deceive people about its fundamental nature, or impersonate real people or organizations in misleading ways.

Amazon Medium

Fraud and Deception. Attempting to defraud or misrepresent yourself or your services to others, including impersonating individuals or entities. Engaging in phishing, pharming, or other deceptive activities.

See all platforms with this clause type →

Monitoring

ElevenLabs has changed this document before.

Original clause language (document record):

"You may not use the Services to generate audio content intended to deceive listeners about the identity of the speaker, spread false information, interfere with elections or democratic processes, or otherwise mislead audiences in ways that could cause harm."

— Excerpt from ElevenLabs's ElevenLabs Usage Policy

ConductAtlas Analysis

Institutional analysis (Compliance & governance intelligence)

(1) Regulatory landscape: This provision engages the EU AI Act's prohibition on AI systems that deploy subliminal techniques or exploit vulnerabilities to distort behavior; FEC regulations and state election laws that increasingly address synthetic media in political advertising; the FTC Act's prohibition on deceptive commercial practices; and state-level political deepfake laws in California (AB 730, AB 2839), Texas, Michigan, and Minnesota. The EU's Digital Services Act also creates platform obligations around disinformation and systemic risk.

(2) Governance exposure: High. The prohibition on election-interference content in particular engages a rapidly evolving regulatory landscape. The policy's language is broad, but its practical enforcement depends on ElevenLabs' ability to detect intent-based violations, which is operationally challenging and may not satisfy regulators seeking affirmative platform safeguards.

(3) Jurisdiction flags: Heightened exposure in the EU under the AI Act and DSA, in California under AB 2839 (which requires disclosure of AI-generated political content), and in any jurisdiction where synthetic media election laws apply. Federal election law exposure exists where content intersects with federal candidates or elections.

(4) Contract and vendor implications: Enterprise and API customers building media production, news, or political communication tools on ElevenLabs should implement editorial controls and disclosure mechanisms for AI-generated audio. The policy places responsibility for intent-based violations on users, but downstream platform liability may arise under applicable intermediary liability and election law frameworks.

(5) Compliance considerations: Compliance teams should assess whether AI-generated audio outputs from ElevenLabs integrations are subject to mandatory disclosure requirements under applicable state or EU law, and implement disclosure workflows accordingly. Political advertising use cases warrant specific legal review given the pace of legislative activity in this area.
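A disclosure workflow of the kind described above can be sketched minimally. The following Python is a hypothetical illustration, not part of any ElevenLabs API: the `AudioDisclosure` record, the `tag_for_publication` helper, and all field names are assumptions about what a downstream editorial or ad-review step might inspect before release.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AudioDisclosure:
    """Hypothetical disclosure record attached to AI-generated audio."""
    provider: str
    voice_id: str
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    disclosure_text: str = "This audio was generated with AI."

def tag_for_publication(audio_bytes: bytes, disclosure: AudioDisclosure) -> dict:
    """Bundle raw audio with a disclosure record so an editorial or
    ad-review step can verify the disclosure before publication."""
    return {
        "audio": audio_bytes,
        "ai_generated": True,
        "disclosure": disclosure,
    }
```

Keeping the disclosure alongside the audio, rather than in a separate log, makes it harder for the metadata to be dropped before the content reaches an audience, which is the practical concern behind disclosure-style rules such as California's AB 2839.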


Applicable agencies

  • FTC
    The FTC has authority over deceptive practices in commercial contexts, including deceptive AI-generated audio used in advertising or consumer communications.
    File a complaint →
  • State AG
    State attorneys general have enforcement authority under state-level political deepfake and consumer protection laws in multiple jurisdictions.
    File a complaint →

Applicable regulations

CFAA
United States Federal
Trump Executive Order on AI Policy Framework
US

Provision details

Document information
Document
ElevenLabs Usage Policy
Entity
ElevenLabs
Document last updated
May 11, 2026
Tracking information
First tracked
May 11, 2026
Last verified
May 11, 2026
Record ID
CA-P-010710
Document ID
CA-D-00779
Evidence Provenance
Source URL
Wayback Machine
Content hash (SHA-256)
3b04c061ee875cc733cfece1b436238b97a43b0e5ec22aaacc3176c33d57981a
Analysis generated
May 11, 2026 13:18 UTC
Methodology
Evidence
✓ Snapshot stored   ✓ Hash verified
Citation Record
Entity: ElevenLabs
Document: ElevenLabs Usage Policy
Record ID: CA-P-010710
Captured: 2026-05-11 13:18:12 UTC
SHA-256: 3b04c061ee875cc7…
URL: https://conductatlas.com/platform/elevenlabs/elevenlabs-usage-policy/disinformation-and-deceptive-synthetic-media-prohibition/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
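The published SHA-256 content hash lets anyone holding the archived snapshot verify it independently. A minimal sketch in Python, assuming you have the snapshot bytes locally; the expected digest below is the one printed in this record's Evidence Provenance section.

```python
import hashlib

# Full content hash as published in this record's Evidence Provenance section.
EXPECTED_SHA256 = "3b04c061ee875cc733cfece1b436238b97a43b0e5ec22aaacc3176c33d57981a"

def verify_snapshot(data: bytes, expected_hex: str) -> bool:
    """Return True when the SHA-256 digest of `data` matches `expected_hex`."""
    return hashlib.sha256(data).hexdigest() == expected_hex.lower()
```

Note that `hexdigest()` returns lowercase hex, so normalizing the expected value with `.lower()` avoids spurious mismatches when a record prints the hash in uppercase.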
Classification
Severity
High
Categories


Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does ElevenLabs's Disinformation and Deceptive Synthetic Media Prohibition clause do?

This provision addresses one of the most socially significant risks of AI voice technology, specifically the generation of synthetic audio designed to manipulate public opinion or deceive audiences, and it has direct implications under emerging synthetic media election laws.

How does this clause affect you?

Users who generate AI audio intended to deceive audiences about speaker identity or spread false information, including political disinformation, are in violation of this provision and face account suspension or termination.

Is ConductAtlas affiliated with ElevenLabs?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by ElevenLabs.