YouTube · YouTube Community Guidelines

Automated and Human Content Detection System

Medium severity

What it is

YouTube uses a combination of AI systems and human reviewers to find and remove videos that break its rules, with most content caught automatically before many people see it.
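A pipeline like the one described above can be sketched as a simple triage function: content above a high classifier confidence is removed automatically, mid-confidence content is queued for a human reviewer, and the rest is left alone. This is a minimal illustrative sketch only; the thresholds, scores, and names below are hypothetical and do not reflect YouTube's actual system.

```python
from dataclasses import dataclass

# Hypothetical thresholds, not YouTube's real values.
AUTO_REMOVE_THRESHOLD = 0.95   # confidence above which content is removed automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # confidence that instead triggers a human review

@dataclass
class Video:
    video_id: str
    violation_score: float  # output of a hypothetical automated classifier

def triage(video: Video) -> str:
    """Route a flagged video to an outcome based on classifier confidence."""
    if video.violation_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_removed"        # removed before wide viewership
    if video.violation_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review_queue"  # a human reviewer decides
    return "no_action"

print(triage(Video("a1", 0.99)))  # → auto_removed
print(triage(Video("b2", 0.70)))  # → human_review_queue
print(triage(Video("c3", 0.10)))  # → no_action
```

The false-positive risk discussed later in this record lives in the first branch: content removed automatically never reaches a reviewer before the creator is affected.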

Consumer impact (what this means for users)

Creators' videos can be automatically removed or restricted by AI systems before human review occurs, potentially impacting their audience reach and revenue without immediate recourse.

What you can do

⚠️ These actions may provide transparency or partial mitigation but may not fully address the underlying issue. Effectiveness varies by jurisdiction and individual circumstances.
  • Dispute a removal
    If you believe your content was incorrectly removed by an automated system, submit an appeal through the YouTube appeals process linked in the removal notification.

Cross-platform context

See how other platforms handle automated and human content detection and similar clauses.


Why it matters (compliance & risk perspective)

Automated detection systems can produce false positives, meaning legitimate content — including news, education, or commentary — may be removed or suppressed without prior human review.

View original clause language
Content that violates our community guidelines is flagged by a mix of automated detection and human reporting — most is automatically detected — and we go to great lengths to make sure violative content isn't widely viewed, or even viewed at all, before it's taken down.

Institutional analysis (Compliance & legal intelligence)

REGULATORY FRAMEWORK: This provision implicates the EU AI Act (Regulation (EU) 2024/1689), particularly provisions on high-risk AI systems used in content moderation affecting individuals' economic interests (Annex III); EU DSA Articles 15 and 42 requiring transparency reporting on automated content moderation; and GDPR Article 22 on automated individual decision-making with significant effects. In the US, FTC Act Section 5 applies where automated systems produce systematically biased or inaccurate outcomes. Primary enforcement authorities: European Commission, national Data Protection Authorities (DPAs), and FTC.


Applicable agencies

  • FTC
    The FTC has authority to investigate unfair or deceptive practices arising from systematic errors in automated content moderation under FTC Act Section 5.

Provision details

Document information
Document
YouTube Community Guidelines
Entity
YouTube
Document last updated
April 29, 2026
Tracking information
First tracked
April 27, 2026
Last verified
April 27, 2026
Record ID
CA-P-003388
Document ID
CA-D-00116
Evidence Provenance
Source URL
Wayback Machine
SHA-256
5b66a5a7dce893613dee25b2888c323e46e2ef66abb62d974276d5f8a251f8da
Verified
✓ Snapshot stored   ✓ Change verified
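The SHA-256 digest above lets anyone independently verify that a stored snapshot has not changed since capture. The sketch below shows one way to do that check with Python's standard library; the file path is hypothetical, and only the digest comes from this record.

```python
import hashlib

# Digest published in this record's Evidence Provenance section.
RECORDED_SHA256 = "5b66a5a7dce893613dee25b2888c323e46e2ef66abb62d974276d5f8a251f8da"

def sha256_of_file(path: str, chunk_size: int = 1 << 16) -> str:
    """Stream the file through SHA-256 so large snapshots never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_snapshot(path: str) -> bool:
    """True if the file on disk still matches the recorded digest."""
    return sha256_of_file(path) == RECORDED_SHA256

# Usage (hypothetical filename):
# print(verify_snapshot("youtube-community-guidelines-snapshot.html"))
```

Any single-byte change to the snapshot produces a completely different digest, so a match is strong evidence the archived copy is unaltered.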
How to Cite
ConductAtlas Policy Archive
Entity: YouTube | Document: YouTube Community Guidelines | Record: CA-P-003388
Captured: 2026-04-27 12:37:48 UTC | SHA-256: 5b66a5a7dce89361…
URL: https://conductatlas.com/platform/youtube/youtube-community-guidelines/automated-and-human-content-detection-system/
Accessed: May 2, 2026
Classification
Severity
Medium