Character.AI prohibits creating characters or content that impersonates real people, whether public figures or private individuals, without their permission.
This analysis describes what Character.AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This provision addresses reputational, privacy, and intellectual property risks associated with AI-generated content depicting real people, and its reference to 'permissible contexts' leaves room for interpretive uncertainty about what uses are allowed.
Interpretive note: The document references 'permissible contexts' for use of name or likeness without defining them, leaving the practical scope of the prohibition ambiguous.
Users who create characters based on real people, including celebrities or public figures, may have that content removed if Character.AI determines it constitutes impersonation, and the document does not define what 'permissible contexts' means in practice.
How other platforms handle this

- "Don't claim to be human when directly and sincerely asked, use AI to deceive people about its fundamental nature, or impersonate real people or organizations in misleading ways."
- "You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities."
- "Customer will not, and will not permit any other person (including any End User) to: ... (d) attempt to reverse engineer, decompile, or otherwise attempt to discover the source code or underlying components (e.g., algorithms, weights, or systems) of the Mistral AI Products, including using the Outpu..."
Monitoring
Character.AI has changed this document before.
"Be Creative But Don't Impersonate: Don't impersonate public figures or private individuals, or use someone's name, likeness, or persona without permission or outside of permissible contexts."
— Excerpt from Character.AI's Community Guidelines
Regulatory landscape
Impersonation and use of likeness without consent engage state right of publicity laws (notably in California and New York), the FTC Act's prohibition on deceptive practices, and potentially the Lanham Act for trademark-related identity misuse. The FTC has also issued guidance on AI-generated impersonation in the context of consumer fraud. EU General Data Protection Regulation provisions on processing of personal data, including biometric and identity-related information, may also be relevant.

Governance exposure
Medium. The provision's reference to 'permissible contexts' without definition creates enforcement ambiguity. Right of publicity claims and defamation exposure associated with AI-generated impersonation content are an active area of legal development, and platforms hosting such content face potential secondary liability depending on jurisdiction and applicable Section 230 protections.

Jurisdiction flags
California's right of publicity statute (Civil Code Section 3344) and New York's analogous provisions create heightened exposure for use of name or likeness without consent. Illinois and other states have enacted or are considering AI-specific personality rights legislation. EU GDPR may apply to processing of personal data associated with real-person characters.

Contract and vendor implications
Legal teams should assess whether the platform's character creation tools include technical controls or review processes for content involving real-person names or likenesses, or whether enforcement is reactive and complaint-driven.

Compliance considerations
Compliance teams should evaluate whether the platform's DMCA and takedown processes extend to right of publicity and impersonation complaints, and whether affected individuals have a clear reporting pathway. The undefined scope of 'permissible contexts' should be reviewed for legal adequacy.
Built from archived source documents, structured governance mappings, and historical version tracking.
Is ConductAtlas affiliated with Character.AI? No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Character.AI.