You cannot use ElevenLabs to create fake voice recordings to commit fraud, steal someone's identity, or trick people or systems for financial benefit.
This analysis describes what ElevenLabs' agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology for details.
AI voice synthesis has been used in documented fraud cases including CEO impersonation scams and telephone authentication bypasses; this provision addresses that risk directly and signals ElevenLabs' position on law enforcement cooperation.
The policy prohibits using ElevenLabs' voice tools for financial fraud or identity theft, and states that violations may be referred to relevant authorities; consumers who are victims of voice-based fraud should also report to relevant law enforcement agencies.
"Users may not use ElevenLabs' platform to generate voice content for the purpose of committing fraud, including financial fraud, identity theft, or unauthorized impersonation for financial gain."
— Excerpt from the ElevenLabs Safety Policy
REGULATORY LANDSCAPE: Fraud and financial crime via AI voice synthesis engages the federal wire fraud statute (18 U.S.C. § 1343), identity theft statutes (18 U.S.C. § 1028), and the FTC Act's prohibition on deceptive practices. The FTC has issued guidance on AI impersonation in fraud contexts. Financial institutions using voice authentication may have independent obligations under the Gramm-Leach-Bliley Act and applicable FFIEC guidance to assess AI-based authentication vulnerabilities.

GOVERNANCE EXPOSURE: Medium. While the prohibition is clear, ElevenLabs' ability to detect and prevent real-time use of its platform for fraud is limited by the nature of downstream use after audio is generated. Enterprise customers in financial services, insurance, or authentication-sensitive industries should assess vendor risk accordingly.

JURISDICTION FLAGS: Financial fraud via AI voice synthesis has been documented and prosecuted at the federal level in the US. UK users are subject to the Fraud Act 2006. EU users may have recourse under national criminal law and the EU's revised payment services framework.

CONTRACT AND VENDOR IMPLICATIONS: Financial services firms using ElevenLabs should assess whether their vendor risk management frameworks cover the downstream fraud risk associated with AI-generated voice content. Third-party risk assessments should include ElevenLabs' fraud detection and reporting capabilities.

COMPLIANCE CONSIDERATIONS: Legal teams should confirm that ElevenLabs' law enforcement referral process for fraud violations is operationalized and that enterprise contracts include appropriate representations regarding platform use restrictions.
Built from archived source documents, structured governance mappings, and historical version tracking.
Is ConductAtlas affiliated with ElevenLabs? No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by ElevenLabs.