ElevenLabs · ElevenLabs Safety Policy

Prohibition on AI-Generated Disinformation

Medium severity · Medium confidence · Explicit document language · Unique · 0 of 325 platforms
Document Record

What it is

You cannot use ElevenLabs to create fake audio recordings that spread false information, such as fabricating a statement by a real person that they never made.

This analysis describes what ElevenLabs's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

The policy specifically names disinformation as a prohibited use, which is relevant to elections, journalism, and public discourse contexts where synthetic audio could be used to fabricate statements by real individuals.

Interpretive note: The boundary between prohibited disinformation and permissible satire or clearly labeled fictional content is not explicitly defined in the policy, which may create interpretive ambiguity in edge cases.

Consumer impact (what this means for users)

This prohibition covers the creation of fabricated audio recordings presented as genuine statements by real people; individuals who encounter content they believe was created using ElevenLabs in violation of this provision may submit a report to ElevenLabs.

Cross-platform context

See how other platforms handle Prohibition on AI-Generated Disinformation and similar clauses.


Monitoring

ElevenLabs has changed this document before.

Original Clause Language
ElevenLabs prohibits the use of its platform to create voice content designed to spread disinformation, including false statements of fact presented as genuine audio recordings of real individuals or fabricated news content.

— Excerpt from ElevenLabs's ElevenLabs Safety Policy


Institutional analysis (Compliance & governance intelligence)

REGULATORY LANDSCAPE: AI-generated disinformation engages FTC Act prohibitions on deceptive practices, the EU AI Act's requirements for labeling AI-generated content, and emerging electoral integrity statutes in multiple US states. The EU's Digital Services Act (DSA) imposes obligations on platforms to address the spread of disinformation at scale, which may apply to ElevenLabs depending on its user volume and service classification in the EU. The FTC and the European Commission (as DSA regulator) are the relevant enforcement authorities.

GOVERNANCE EXPOSURE: Medium. The prohibition is stated broadly, but enforcement depends on ElevenLabs's content detection capabilities, which the policy describes as involving both automated and human review. The operational gap between a policy prohibition and reliable detection of disinformation at scale is a standard challenge for AI platform providers.

JURISDICTION FLAGS: Several US states, including California, Minnesota, and Texas, have enacted or proposed statutes specifically targeting AI-generated disinformation in electoral contexts. EU users are subject to the AI Act's transparency requirements for synthetic media. Heightened exposure exists for any use of ElevenLabs in political advertising or news media production.

CONTRACT AND VENDOR IMPLICATIONS: Media companies, political campaigns, and PR firms using ElevenLabs should explicitly address disinformation risk in their vendor governance frameworks and ensure content review processes are in place before publishing any AI-generated audio.

COMPLIANCE CONSIDERATIONS: Organizations producing AI-generated audio for public distribution should implement a disclosure and labeling workflow consistent with the EU AI Act's requirements for synthetic media and applicable state disclosure laws. Internal acceptable use policies should reference ElevenLabs's disinformation prohibition as a binding contractual constraint.
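The "disclosure and labeling workflow" recommended above is a process requirement, not a prescribed data format. As one minimal sketch, assuming a team chooses a JSON sidecar file per published clip (all field names here are hypothetical, not an official EU AI Act or state-law schema), a labeling step could look like this:

```python
import hashlib
import json
from datetime import datetime, timezone

def build_disclosure_manifest(audio_bytes: bytes, generator: str) -> dict:
    """Build a sidecar manifest labeling a clip as AI-generated audio.

    The schema is illustrative only; it binds the label to the clip by
    recording the clip's SHA-256 digest alongside the disclosure text.
    """
    return {
        "synthetic_media": True,
        "generator": generator,
        "sha256": hashlib.sha256(audio_bytes).hexdigest(),
        "labeled_at": datetime.now(timezone.utc).isoformat(),
        "disclosure": "This audio was generated with an AI voice tool.",
    }

# Hypothetical usage: label a clip before publication.
manifest = build_disclosure_manifest(b"\x00\x01", generator="example-tts")
print(json.dumps(manifest, indent=2))
```

A review step could then refuse publication of any clip lacking a matching manifest, which is one way to make the labeling obligation auditable.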


Applicable agencies

  • FTC — authority over deceptive practices, including AI-generated disinformation used in commerce or that harms consumers.
  • State attorneys general — authority over electoral disinformation and consumer protection violations under applicable state statutes.

Provision details

Document information
  • Document: ElevenLabs Safety Policy
  • Entity: ElevenLabs
  • Document last updated: May 12, 2026

Tracking information
  • First tracked: May 12, 2026
  • Last verified: May 12, 2026
  • Record ID: CA-P-012012
  • Document ID: CA-D-00833
Evidence Provenance
  • Source URL: Wayback Machine
  • Content hash (SHA-256): b0b41cc06f252ab010e962f89a076fb511fcaecb58e9679d339728b7264dae47
  • Analysis generated: May 12, 2026 17:04 UTC
  • Evidence: ✓ Snapshot stored · ✓ Hash verified
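Hash verification of this kind is straightforward to reproduce independently. The sketch below (the snapshot filename is hypothetical) streams a local copy of the archived document through SHA-256 and compares the hex digest against the recorded content hash:

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Stream a file through SHA-256 in chunks and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage against the recorded content hash:
# recorded = "b0b41cc06f252ab010e962f89a076fb511fcaecb58e9679d339728b7264dae47"
# assert sha256_of_file("snapshot.html") == recorded
```

Chunked reading keeps memory use constant regardless of snapshot size; a digest mismatch would indicate the local copy differs from the archived version.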
Citation Record
Entity: ElevenLabs
Document: ElevenLabs Safety Policy
Record ID: CA-P-012012
Captured: 2026-05-12 17:04:27 UTC
SHA-256: b0b41cc06f252ab0…
URL: https://conductatlas.com/platform/elevenlabs/elevenlabs-safety-policy/prohibition-on-ai-generated-disinformation/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
  • Severity: Medium
  • Categories:



Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does ElevenLabs's Prohibition on AI-Generated Disinformation clause do?

The policy specifically names disinformation as a prohibited use, which is relevant to elections, journalism, and public discourse contexts where synthetic audio could be used to fabricate statements by real individuals.

How does this clause affect you?

This prohibition covers the creation of fabricated audio recordings presented as genuine statements by real people; individuals who encounter content they believe was created using ElevenLabs in violation of this provision may submit a report to ElevenLabs.

Is ConductAtlas affiliated with ElevenLabs?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by ElevenLabs.