Character.AI · Character.ai Community Guidelines

Wellbeing and Self-Harm Content Prohibition

High severity · High confidence · Explicit document language · Unique (0 of 325 platforms)
Document Record

What it is

Character.AI prohibits content that promotes, glorifies, or encourages self-harm, suicide, or eating disorders, including extreme fitness content and body shaming.

This analysis describes what Character.AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This provision bears directly on user safety, particularly for minor users, and places the platform within ongoing public and regulatory scrutiny of how social and AI platforms meet mental health content moderation obligations.

Consumer impact (what this means for users)

Users who engage with or create content in these categories may have that content removed, and users in distress should be aware that the platform's safety guidelines direct them toward qualified professionals rather than AI characters for mental health support.

How other platforms handle this

Runway · Medium severity

You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.

Midjourney · Medium severity

Do not generate images for political campaigns or to try to influence the outcome of an election. Do not generate images to spread misinformation or disinformation. Do not generate images to attempt to or to actually deceive or defraud anyone. Do not intentionally mislead recipients of generated ima...

Delta Airlines · Medium severity

All content on this Internet site ("the delta.com website") is owned or controlled by Delta Air Lines and is protected by worldwide copyright laws.


Monitoring

Character.AI has changed this document before.

Original Clause Language

"Support Wellbeing: Be mindful when discussing sensitive topics like self-harm, eating disorders, or suicide. Promotion, glorification, or encouragement of these topics is prohibited, as they can be triggering or pose serious safety risks. This includes extreme fitness content and body shaming that could promote eating disorders."

— Excerpt from Character.AI's Character.ai Community Guidelines


Institutional analysis (Compliance & governance intelligence)

REGULATORY LANDSCAPE: Mental health and self-harm content moderation engages the FTC Act, COPPA (for minor users), and emerging state-level legislation targeting harmful social media content directed at minors, including California's AB 2408 framework. The Surgeon General's advisory on social media and youth mental health has increased regulatory attention on this category. The EU Digital Services Act imposes risk assessment and mitigation obligations for systemic risks, including mental health impacts on minors.

GOVERNANCE EXPOSURE: High. Platform liability for AI-generated content contributing to user self-harm is an active area of litigation and regulatory scrutiny. The prohibition is stated as a user conduct rule, but the platform's AI models generate content in response to user prompts, creating a parallel obligation to ensure model outputs do not promote self-harm even when users attempt to elicit such content.

JURISDICTION FLAGS: California, New York, and several other states have enacted or are considering legislation that would impose specific obligations on platforms regarding harmful content directed at minors, including self-harm and eating disorder content. EU DSA obligations for very large online platforms include mandatory risk assessments for mental health impacts.

CONTRACT AND VENDOR IMPLICATIONS: Organizations using Character.AI in healthcare, educational, or youth-serving contexts should verify that the platform's self-harm content controls satisfy applicable duty-of-care obligations and sector-specific regulations.

COMPLIANCE CONSIDERATIONS: Compliance teams should assess whether the platform's crisis intervention protocols (such as routing users seeking self-harm information to emergency resources) are documented and whether they meet evolving regulatory standards for online mental health safety.
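To make the crisis-routing pattern referenced above concrete, here is a minimal Python sketch of that kind of guardrail. It is illustrative only, not Character.AI's implementation: the keyword list, resource text, and function names are hypothetical stand-ins for whatever trained classifier and jurisdiction-aware escalation a real platform would use.

```python
# Illustrative sketch only -- not Character.AI's actual implementation.
# A production system would use a trained classifier and jurisdiction-aware
# resources; every name below is a hypothetical placeholder.

CRISIS_RESOURCES = (
    "If you are in distress, please contact a qualified professional. "
    "In the US, you can call or text 988 (Suicide & Crisis Lifeline)."
)

# Crude stand-in for a self-harm / eating-disorder content classifier.
SELF_HARM_SIGNALS = ("self-harm", "suicide", "hurt myself", "stop eating")


def flags_self_harm(text: str) -> bool:
    """Keyword check standing in for a real moderation model."""
    lowered = text.lower()
    return any(signal in lowered for signal in SELF_HARM_SIGNALS)


def respond(user_message: str, generate_reply) -> str:
    """Route flagged messages to crisis resources instead of the model.

    `generate_reply` is a placeholder for whatever function normally
    produces the AI character's reply.
    """
    if flags_self_harm(user_message):
        # Do not generate character dialogue; surface resources instead.
        return CRISIS_RESOURCES
    return generate_reply(user_message)


if __name__ == "__main__":
    print(respond("some ordinary message", lambda m: "(character reply)"))
    print(respond("I want to hurt myself", lambda m: "(character reply)"))
```

The routing step, not the detection heuristic, is the point: the compliance question above is whether such an escalation path exists and is documented, not how the platform detects the content.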


Applicable agencies

  • FTC: The FTC has authority over unfair or deceptive practices by platforms affecting consumer safety, including inadequate moderation of content harmful to minors.
  • State AG: State Attorneys General have brought, and may bring, consumer protection actions against platforms over inadequate protection of minors from harmful mental health content.

Applicable regulations

  • CFAA (United States, Federal)
  • DMCA (United States, Federal)

Provision details

Document information
  Document: Character.ai Community Guidelines
  Entity: Character.AI
  Document last updated: May 11, 2026

Tracking information
  First tracked: May 11, 2026
  Last verified: May 11, 2026
  Record ID: CA-P-010618
  Document ID: CA-D-00780

Evidence Provenance
  Source URL: Wayback Machine
  Content hash (SHA-256): ec0a9230a377aef5831a06c6ed9e3bbc7b54344595a80c04401a4ca4fe5a8d48
  Analysis generated: May 11, 2026 12:24 UTC
  Evidence: ✓ Snapshot stored · ✓ Hash verified
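Because the record publishes a full SHA-256 content hash, anyone holding the archived snapshot can verify it independently. A minimal sketch in Python, assuming the snapshot has been saved to a local file (the file path is hypothetical; `hashlib` is in the standard library):

```python
import hashlib

# Content hash published in this record (see "Evidence Provenance" above).
RECORDED_SHA256 = "ec0a9230a377aef5831a06c6ed9e3bbc7b54344595a80c04401a4ca4fe5a8d48"


def sha256_of_file(path: str) -> str:
    """Hash the file in chunks so large snapshots stay out of memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Hypothetical local copy of the archived snapshot.
snapshot_path = "characterai-community-guidelines-snapshot.html"
if sha256_of_file(snapshot_path) == RECORDED_SHA256:
    print("Snapshot matches the recorded hash.")
else:
    print("Mismatch: file differs from the version this analysis covers.")
```

One caveat: the record does not state exactly which byte stream was hashed, and archive services such as the Wayback Machine often rewrite URLs inside captured pages, so a mismatch against a raw capture may reflect normalization rather than tampering.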
Citation Record
Entity: Character.AI
Document: Character.ai Community Guidelines
Record ID: CA-P-010618
Captured: 2026-05-11 12:24:11 UTC
SHA-256: ec0a9230a377aef5…
URL: https://conductatlas.com/platform/characterai/characterai-community-guidelines/wellbeing-and-self-harm-content-prohibition/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
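For teams folding this record into their own compliance documentation, the citation fields map naturally onto a small structured type. The `CitationRecord` class below is our own illustration, not a ConductAtlas API; only the field values come from the record above.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CitationRecord:
    """Illustrative container for the stable citation fields above."""
    entity: str
    document: str
    record_id: str
    captured_utc: str
    url: str

    def cite(self) -> str:
        """Render a one-line citation string for filings or reports."""
        return (f'{self.entity}, "{self.document}", ConductAtlas record '
                f"{self.record_id}, captured {self.captured_utc} UTC, {self.url}")


record = CitationRecord(
    entity="Character.AI",
    document="Character.ai Community Guidelines",
    record_id="CA-P-010618",
    captured_utc="2026-05-11 12:24:11",
    url="https://conductatlas.com/platform/characterai/"
        "characterai-community-guidelines/"
        "wellbeing-and-self-harm-content-prohibition/",
)
print(record.cite())
```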
Classification
  Severity: High



Frequently Asked Questions

What does Character.AI's Wellbeing and Self-Harm Content Prohibition clause do?

It prohibits content that promotes, glorifies, or encourages self-harm, suicide, or eating disorders, including extreme fitness content and body shaming. The provision bears directly on user safety, particularly for minor users, and sits within ongoing public and regulatory scrutiny of how social and AI platforms moderate mental health content.

How does this clause affect you?

Users who engage with or create content in these categories may have that content removed, and users in distress should be aware that the platform's safety guidelines direct them toward qualified professionals rather than AI characters for mental health support.

Is ConductAtlas affiliated with Character.AI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Character.AI.