Character.AI uses your chat conversations, voice recordings, and other interaction data to train its AI systems and develop new AI features.
This analysis describes what Character.AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.
Users engaging in potentially personal or sensitive conversations with AI characters may not fully appreciate that their messages and voice inputs can become training material for commercial AI models.
Your private chat messages and voice recordings may be used to train Character.AI's AI models, meaning the content of your conversations has a use beyond your immediate interaction with the platform.
How other platforms handle this
We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization). We may leverage these models independent of user selection for processi...
Writer does not use Customer Data to train its AI models without explicit customer permission. Customer Data means the data, content, and information that customers and their end users submit to or through the Services.
We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.
Monitoring
Character.AI has changed this document before.
"Analyze, maintain, improve, modify, customize, and measure the Services, including to train our artificial intelligence/machine learning models; Develop new features, algorithms and machine learning models, programs, and services." — Excerpt from Character.AI's Privacy Policy
REGULATORY LANDSCAPE: This provision implicates GDPR Articles 6 and 13 (lawful basis and transparency for processing personal data for AI training), GDPR Article 9 where chat content reveals special category data, CCPA's sensitive personal information provisions for voice data and inferred data, and the EU AI Act insofar as training data governance requirements apply to foundation model developers. The FTC and EU data protection authorities are the primary enforcement bodies. The policy does not specify a GDPR lawful basis for AI training use in the base document, deferring to Regional Privacy Disclosures, which may not satisfy standalone transparency obligations under Article 13.

GOVERNANCE EXPOSURE: High. The use of personal data, including voice recordings and chat communications, for AI model training without an explicitly stated lawful basis in the base policy creates material regulatory exposure under GDPR and UK GDPR. Regulators in the EU have scrutinized AI training data practices across multiple platforms, and the inclusion of voice data raises additional sensitivity given its potential treatment as biometric data under certain frameworks.

JURISDICTION FLAGS: EU and UK users face heightened exposure given GDPR and UK GDPR requirements for explicit lawful basis documentation and data subject rights around automated processing. California users have CCPA rights regarding sensitive personal information, including voice data. Illinois users may have claims under BIPA if voice data is processed in ways that constitute biometric identifier collection. Minor users globally face additional protections under COPPA and the UK Age Appropriate Design Code.

CONTRACT AND VENDOR IMPLICATIONS: Vendors and service providers receiving data for model training purposes must be assessed under GDPR Article 28 data processing agreements. If third-party AI infrastructure providers receive raw training data, their sub-processor status and contractual obligations require verification. This provision may also affect enterprise or API customers who integrate Character.AI into their own services.

COMPLIANCE CONSIDERATIONS: Compliance teams should document the specific lawful basis claimed for AI training in the Regional Privacy Disclosures and assess whether it aligns with the base policy language. A data mapping exercise should identify which data categories flow into training pipelines, with particular attention to voice data and chat content containing sensitive disclosures. Consent mechanisms for model training opt-out, referenced separately in the policy's navigation as 'About our Model Training,' should be audited for accessibility and effectiveness.
How 10 AI platforms describe the use of user data for model training, improvement, and development, based on archived governance provisions.
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Character.AI.