Character.AI · Character.ai Community Guidelines

Automated Moderation and Human Review Disclosure

Medium severity · Medium confidence · Explicit document language · Unique (0 of 325 platforms)
Document Record

What it is

Character.AI uses a combination of automated software tools and human reviewers to filter content, and its AI models themselves are built with content restrictions.
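Character.AI does not publish the internals of this pipeline; as an illustration only, the disclosed two-stage design (an automated filter first, with flagged items escalated to human reviewers) might be sketched as follows. All names, thresholds, and the stand-in classifier here are hypothetical assumptions, not the platform's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of a two-stage moderation pipeline:
# stage 1 is an automated filter; anything it flags is routed
# to human review rather than decided by software alone.

@dataclass
class ModerationResult:
    allowed: bool
    reason: Optional[str] = None
    needs_human_review: bool = False

BLOCKLIST = {"banned_term"}   # hypothetical automated rule set
REVIEW_THRESHOLD = 0.5        # hypothetical classifier cutoff

def classifier_score(text: str) -> float:
    """Stand-in for an automated harm classifier (hypothetical)."""
    return 1.0 if any(term in text.lower() for term in BLOCKLIST) else 0.0

def moderate(text: str) -> ModerationResult:
    """Stage 1: automated scoring. Flagged content is escalated, not dropped."""
    score = classifier_score(text)
    if score >= REVIEW_THRESHOLD:
        return ModerationResult(allowed=False, reason="flagged",
                                needs_human_review=True)
    return ModerationResult(allowed=True)
```

The escalation flag is the detail that matters for the privacy analysis below: once `needs_human_review` is set, the flagged conversation content becomes visible to a human reviewer.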

This analysis describes what Character.AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This provision discloses that human reviewers have access to user content and AI-generated outputs, which is relevant to user privacy expectations and may engage data protection obligations depending on what data is reviewed and retained.

Interpretive note: The document does not specify what data categories are accessible to human reviewers or what data retention practices apply to reviewed content, leaving the full privacy scope uncertain.

Consumer impact (what this means for users)

Users should be aware that their content and interactions may be reviewed by both automated systems and human moderators, meaning conversations on the platform are not treated as private in the context of safety enforcement.

Cross-platform context

See how other platforms handle Automated Moderation and Human Review Disclosure and similar clauses.


Monitoring

Character.AI has changed this document before.

Original Clause Language

"These guidelines apply to all aspects of the Character.AI experience. Our systems filter illegal or harmful content through both automated moderation and human review, while our AI models are designed with filters and limits to prevent inappropriate outputs."

— Excerpt from Character.AI's Character.ai Community Guidelines

ConductAtlas Analysis

Institutional analysis (Compliance & governance intelligence)

REGULATORY LANDSCAPE: The disclosure of human review of user content engages GDPR and CCPA privacy frameworks, particularly regarding the lawful basis for processing user conversation data and the disclosure of that processing in the platform's privacy policy. The use of automated decision-making in content moderation may also engage GDPR Article 22 regarding automated processing with significant effects, depending on how moderation outcomes are characterized.

GOVERNANCE EXPOSURE: Medium. The adequacy of privacy disclosures associated with human review of AI conversation content is a known area of regulatory scrutiny. If human reviewers access the substantive content of user conversations, data minimization and access control obligations under GDPR and CCPA are directly implicated. The document does not specify what data categories are accessible to human reviewers.

JURISDICTION FLAGS: EU and UK users have heightened rights regarding automated processing and human review of personal data under GDPR and UK GDPR. California users have CCPA rights regarding the use of their data in moderation processes. If minor users' conversations are reviewed by human moderators, COPPA obligations regarding data handling are additionally implicated.

CONTRACT AND VENDOR IMPLICATIONS: The reference to contracted moderators and vendors in the Safety Center pages (disclosed in the document's embedded content) creates vendor management and data processing agreement obligations under GDPR for EU-facing operations. Legal teams should verify that data processing agreements are in place with all moderation vendors.

COMPLIANCE CONSIDERATIONS: Compliance teams should assess whether the privacy policy adequately discloses the scope and basis for human review of user content, and whether data retention practices for content flagged during moderation are documented. Consent and transparency obligations under applicable frameworks should be reviewed against current disclosure practices.


Applicable agencies

  • FTC
    The FTC has authority over deceptive or unfair data practices, including inadequate disclosure of human review of consumer communications.

Provision details

Document information
Document
Character.ai Community Guidelines
Entity
Character.AI
Document last updated
May 11, 2026
Tracking information
First tracked
May 11, 2026
Last verified
May 11, 2026
Record ID
CA-P-010617
Document ID
CA-D-00780
Evidence Provenance
Source URL
Wayback Machine
Content hash (SHA-256)
ec0a9230a377aef5831a06c6ed9e3bbc7b54344595a80c04401a4ca4fe5a8d48
Analysis generated
May 11, 2026 12:24 UTC
Methodology
Evidence
✓ Snapshot stored   ✓ Hash verified
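The evidence record above pairs a stored snapshot with a SHA-256 content hash, which lets anyone holding a copy of the archived document confirm it matches the version analyzed here. A minimal verification sketch, using only the hash published in this record (the file path and function names are illustrative):

```python
import hashlib

# SHA-256 published in the evidence record above.
RECORDED_HASH = "ec0a9230a377aef5831a06c6ed9e3bbc7b54344595a80c04401a4ca4fe5a8d48"

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_record(path: str) -> bool:
    """True if the local snapshot is byte-identical to the recorded version."""
    return sha256_of_file(path) == RECORDED_HASH
```

Because SHA-256 is computed over the exact bytes, any edit to the snapshot, however small, produces a different digest, which is what makes the hash a stable identifier for citation.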
Citation Record
Entity: Character.AI
Document: Character.ai Community Guidelines
Record ID: CA-P-010617
Captured: 2026-05-11 12:24:11 UTC
SHA-256: ec0a9230a377aef5…
URL: https://conductatlas.com/platform/characterai/characterai-community-guidelines/automated-moderation-and-human-review-disclosure/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity
Medium
Categories



Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Character.AI's Automated Moderation and Human Review Disclosure clause do?

This provision discloses that human reviewers have access to user content and AI-generated outputs, which is relevant to user privacy expectations and may engage data protection obligations depending on what data is reviewed and retained.

How does this clause affect you?

Users should be aware that their content and interactions may be reviewed by both automated systems and human moderators, meaning conversations on the platform are not treated as private in the context of safety enforcement.

Is ConductAtlas affiliated with Character.AI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Character.AI.