Character.AI prohibits content that promotes or depicts real-world violence, torture, animal abuse, terrorism, or extremist ideologies, while still allowing fictional storytelling in these genres.
This analysis describes what Character.AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
The provision draws a distinction between fictional narrative exploration and real-world promotion of harmful content, but does not define where that line falls operationally, leaving enforcement to Character.AI's moderation discretion.
Interpretive note: The boundary between permissible fictional storytelling and prohibited promotion of violence or extremism is not defined in the document and is left to platform enforcement discretion.
Users writing fiction with violent or dark themes should note that Character.AI retains discretion to decide when specific content crosses from fictional exploration into prohibited promotion, and may respond with content removal or account action.
How other platforms handle this
You agree that you will not: post, upload, transmit, or otherwise make available through the Twitch Services any content that is libelous, defamatory, obscene, pornographic, abusive, harassing, threatening, hateful, objectionable with respect to race, religion, gender, sexual orientation, national o...
You may not use the Venmo services for any illegal purpose, to send money to any person or organization on a government sanctions list, for gambling, for purchasing or selling illegal goods or services, or for any activity that violates applicable law. You may not use Venmo for commercial transactio...
You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.
Monitoring
Character.AI has changed this document before.
"Support a Safe Environment: Focus on creating and interacting with content that uplifts, entertains, or educates. Bold storytelling is encouraged, but content that harms, intimidates, or endangers others — especially minors — is prohibited. This includes any promotion or depiction of real-world violence, torture, gore, animal abuse, terrorism, or extremist ideologies."
— Excerpt from Character.AI's Community Guidelines
REGULATORY LANDSCAPE: Terrorism and extremist content prohibitions engage the FTC Act's unfair and deceptive practices framework and may interact with the Terrorist Content Analytics Platform (TCAP) and applicable federal counter-terrorism statutes. The EU's Terrorist Content Online Regulation (TCO Regulation) requires rapid removal of terrorist content and imposes obligations on hosting service providers. The UK Online Safety Act includes similar provisions.

GOVERNANCE EXPOSURE: Medium. The distinction between permissible fictional storytelling and prohibited real-world promotion is operationally ambiguous and likely enforced through a combination of automated classifiers and human review. The absence of clear definitional criteria in the policy creates moderation discretion that could result in inconsistent enforcement.

JURISDICTION FLAGS: EU platforms face mandatory one-hour removal requirements for terrorist content under the TCO Regulation, which may be relevant if Character.AI serves EU users. UK Online Safety Act obligations regarding illegal content apply to services with UK users.

CONTRACT AND VENDOR IMPLICATIONS: Enterprise customers in media, education, or research contexts should assess whether their use cases involving fictional violence or extremism research could trigger enforcement action under this provision's broadly drafted prohibition.

COMPLIANCE CONSIDERATIONS: Compliance teams should evaluate whether automated classifiers are calibrated to distinguish fictional from promotional content at the precision required by applicable law, and whether appeals or review processes exist for contested moderation decisions in this category.
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Character.AI.