It is completely prohibited to use ElevenLabs to create voice content that sexualizes children or could be used to harm or exploit minors in any way.
This analysis describes what ElevenLabs's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology
This provision establishes an absolute prohibition on using ElevenLabs for child sexual exploitation material or grooming, which carries serious criminal law implications in virtually all jurisdictions and triggers mandatory reporting obligations.
The policy establishes a zero-tolerance standard for any voice content involving the sexual exploitation or targeting of minors, and violations are among the categories the policy states may be referred to law enforcement.
"ElevenLabs strictly prohibits the use of its platform to generate any voice content that sexualizes, exploits, or targets minors for harm. This includes generating content that could be used to groom, abuse, or facilitate exploitation of children." — Excerpt from the ElevenLabs Safety Policy
REGULATORY LANDSCAPE: This provision engages the federal PROTECT Act and the 18 U.S.C. provisions criminalizing child sexual abuse material (CSAM), including AI-generated CSAM; COPPA, which governs data collection from children under 13; and the UK Online Safety Act, which imposes obligations on platforms to prevent child sexual abuse material. The National Center for Missing and Exploited Children (NCMEC) CyberTipline is the designated reporting mechanism for CSAM under US federal law. The FTC and DOJ are the relevant US federal enforcement authorities.

GOVERNANCE EXPOSURE: High. Any platform providing generative AI capabilities has an elevated obligation to implement proactive detection and reporting mechanisms for child sexual exploitation content. The policy states this conduct is prohibited but does not specify whether ElevenLabs has implemented NCMEC CyberTipline reporting workflows, which is a standard compliance expectation for covered platforms.

JURISDICTION FLAGS: This prohibition applies globally, as CSAM is criminalized in virtually all jurisdictions. The UK Online Safety Act imposes specific proactive duty-of-care obligations. EU member states implement obligations under the EU Directive on combating sexual abuse and sexual exploitation of children.

CONTRACT AND VENDOR IMPLICATIONS: Enterprise customers deploying ElevenLabs in consumer-facing products accessible to minors should confirm in their vendor agreements that ElevenLabs maintains NCMEC CyberTipline reporting and proactive detection capabilities. COPPA compliance assessments are required for any deployment targeting, or likely to attract, users under 13.

COMPLIANCE CONSIDERATIONS: Compliance teams should confirm ElevenLabs' CSAM detection and reporting infrastructure as part of vendor due diligence, particularly for platforms accessible to minors. Any enterprise deployment in educational or youth-facing contexts should include a COPPA assessment and a review of parental consent workflows.
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by ElevenLabs.