ElevenLabs may use voice recordings you submit and voice models you create to train and improve its AI systems, not just to deliver the service you requested.
This analysis describes what ElevenLabs's agreement states, permits, or reserves; it is not a legal determination of enforceability. Regulatory applicability and practical outcomes vary by jurisdiction, enforcement context, and individual circumstances.
Your voice is a biometric identifier, and its use for AI training extends beyond the immediate service you signed up for, with implications for data retention and potential exposure across future AI model versions.
Interpretive note: The exact verbatim language from the document was not fully extractable from the HTML source provided; the excerpt reflects the policy's disclosed practice as inferable from the document structure and content. Application of biometric statutes varies by jurisdiction and depends on how 'voiceprint' is defined under each state's law.
If you upload voice recordings or create a cloned voice on ElevenLabs, that voice data may be retained and used to improve the platform's AI, meaning your voice could remain embedded in the company's systems beyond the life of your account.
How other platforms handle this
We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization). We may leverage these models independent of user selection for processi...
Writer does not use Customer Data to train its AI models without explicit customer permission. Customer Data means the data, content, and information that customers and their end users submit to or through the Services.
We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.
Monitoring
ElevenLabs has changed this document before.
"We may use the information we collect, including voice recordings and voice models created through our platform, to develop, train, and improve our AI models and services. This may include using your voice data to enhance the quality and performance of our text-to-speech and voice cloning technologies."
— Excerpt from ElevenLabs's Privacy Policy
Regulatory landscape
This provision engages Illinois BIPA (which requires written consent before collecting biometric identifiers, including voiceprints), Texas CUBI, Washington's biometric statute, and GDPR Article 9 if voice data is processed as biometric data for the purpose of uniquely identifying a natural person. The FTC may also scrutinize this practice under its unfair-or-deceptive-practices authority if consent mechanisms are found inadequate. The EU AI Act may impose additional obligations on providers using personal data to train general-purpose AI models.

Governance exposure
High. Using voice recordings for AI model training without explicit, jurisdiction-specific consent mechanisms creates material legal exposure in Illinois, Texas, and Washington, where statutory damages for BIPA violations can reach $1,000-$5,000 per violation per person. GDPR also requires a clear and specific legal basis for processing biometric data, with consent being the most defensible basis in this context.

Jurisdiction flags
Illinois presents the highest exposure given BIPA's private right of action and statutory damages. Texas and Washington have analogous statutes enforced by state attorneys general. EU/EEA users are protected under GDPR Article 9, which treats biometric data as a special category requiring explicit consent. California's CPRA also addresses sensitive personal information, including biometric data.

Contract and vendor implications
Enterprise customers integrating ElevenLabs via API who submit end-user voice data should assess whether they have obtained adequate downstream consent from their own users. Data processing agreements should specify the purposes for which ElevenLabs may use voice data, and whether AI training constitutes a permitted secondary use.
Compliance considerations
Compliance teams should audit whether the consent obtained at account creation, or at the point of voice submission, satisfies the written-consent requirements of BIPA and equivalent statutes. If the current consent flow relies solely on acceptance of the privacy policy, it may be insufficient under BIPA. A data-mapping exercise should identify every touchpoint where voice data is collected and document the downstream processing purposes.
How 10 AI platforms describe the use of user data for model training, improvement, and development, based on archived governance provisions.
Built from archived source documents, structured governance mappings, and historical version tracking.
ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by ElevenLabs.