If you share sensitive information like your race, religion, or sexual orientation in chats with AI characters, Character.AI may collect and process that information, even though it advises you not to share it.
This analysis describes what Character.AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. See our methodology for details.
Users of AI chat platforms commonly share personal details in the course of conversation. This provision acknowledges that sensitive categories of data may be collected through those interactions; the policy's primary protection is an advisory warning rather than a technical or contractual restriction on collection.
Sensitive personal details you share in AI character conversations, such as information about your health, beliefs, or sexuality, may be collected and processed by Character.AI, with the company's main safeguard being a recommendation not to share such information rather than a commitment not to process it.
Cross-platform context
See how other platforms handle Sensitive Personal Information Handling and similar clauses.
Compare across platforms →

Monitoring
Character.AI has changed this document before.
Receive same-day alerts, structured change summaries, and monitoring for up to 10 platforms.
"Information you provide directly to us might also include information that may be treated as sensitive under applicable laws, such as login credentials, personal communications received or sent using our Services, or other information about yourself that you choose to provide when using our Services (e.g., if you voluntarily post user content revealing your race, religion, or sexual orientation). Sensitive personal information is not required or necessary to use the Services – please do not include any sensitive personal information in your interactions with us or the Services."

— Excerpt from the Character.ai Privacy Policy
REGULATORY LANDSCAPE: Sensitive personal information categories including race, religion, and sexual orientation are special category data under GDPR Article 9, requiring explicit consent or another enumerated basis for processing. Under the CCPA, sensitive personal information has an expanded definition and specific opt-out rights. The FTC and EU data protection authorities are the primary enforcement bodies. The policy's acknowledgment that such data may be collected, without articulating a specific processing basis or restriction in the base document, creates regulatory exposure.

GOVERNANCE EXPOSURE: High. Collecting special category data through an open-ended conversational AI interface without robust consent or processing restrictions represents significant GDPR Article 9 exposure. The policy's approach of discouraging rather than preventing sensitive data collection may be insufficient as a compliance mechanism under GDPR, which requires explicit consent or a documented enumerated basis for special category processing.

JURISDICTION FLAGS: EU and UK users have the strongest protections, as GDPR Article 9 imposes strict requirements on special category data. California's CCPA defines sensitive personal information broadly and grants opt-out rights. Illinois, Colorado, Connecticut, and other US states with comprehensive privacy laws also have sensitive data provisions that may be engaged. Healthcare and mental health disclosures in chat may additionally engage HIPAA considerations if the platform is ever used in a clinical context, though this is not indicated by the document.

CONTRACT AND VENDOR IMPLICATIONS: If sensitive personal data flows to the advertising and analytics vendors disclosed in Section 3, those data processing relationships require heightened scrutiny under GDPR Article 9 and the CCPA. Vendor contracts should specify restrictions on sensitive data use. The policy's disclosure that such data may be collected through normal use, without a specific restriction mechanism, may complicate vendor contractual warranties.

COMPLIANCE CONSIDERATIONS: Compliance teams should assess whether the AI model training use of chat data includes processing pipelines that encounter special category data, and if so, document the GDPR Article 9 basis. Technical controls to detect and restrict special category data in training pipelines should be evaluated. The adequacy of an advisory warning as a substitute for affirmative consent or processing restrictions should be assessed against applicable law in key jurisdictions.
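One way to evaluate the kind of technical control described above is a pre-training filter that flags likely special-category content before chat data enters a training pipeline. The sketch below is purely illustrative and assumes a simple keyword-pattern approach; the category names and patterns are hypothetical examples, not Character.AI's actual pipeline, and a production system would rely on trained classifiers, broader pattern coverage, and human review.

```python
import re

# Hypothetical keyword patterns for a few GDPR Article 9 special categories.
# Real controls would use classifiers and far more comprehensive signals;
# these labels and regexes are illustrative only.
SPECIAL_CATEGORY_PATTERNS = {
    "health": re.compile(r"\b(diagnos\w+|prescri\w+|therap\w+|depression)\b", re.I),
    "religion": re.compile(r"\b(christian|muslim|jewish|hindu|buddhist|atheist)\b", re.I),
    "sexual_orientation": re.compile(r"\b(gay|lesbian|bisexual|heterosexual)\b", re.I),
}

def flag_special_category(message: str) -> list[str]:
    """Return the special-category labels whose patterns match the message."""
    return [label for label, pattern in SPECIAL_CATEGORY_PATTERNS.items()
            if pattern.search(message)]

def filter_training_batch(messages: list[str]) -> list[str]:
    """Drop any message that trips a special-category pattern before training."""
    return [m for m in messages if not flag_special_category(m)]
```

A keyword filter like this errs on the side of over-removal and misses paraphrased disclosures, which is why the compliance analysis above frames such controls as something to be evaluated rather than a sufficient safeguard on their own.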
Built from archived source documents, structured governance mappings, and historical version tracking.
ConductAtlas has identified this type of provision across 2 platforms. See the full comparison.
ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Character.AI.