TikTok prohibits content that attacks or dehumanizes individuals or groups based on protected characteristics including race, ethnicity, religion, gender, sexual orientation, and disability — violating content can be removed and accounts can be banned.
While hate speech prohibitions are standard on major platforms, the breadth of TikTok's protected categories and the subjectivity of 'dehumanization' determinations create a risk of inconsistent enforcement, with potential for both under- and over-enforcement.
REGULATORY FRAMEWORK: The EU Digital Services Act (Art. 34) requires very large online platforms (VLOPs) to assess systemic risks, including the dissemination of illegal hate speech, and member state laws implementing Council Framework Decision 2008/913/JHA impose criminal liability for incitement to hatred. Germany's NetzDG requires removal of manifestly unlawful hate speech within 24 hours (extended to 7 days for complex cases), with fines of up to €50M. In the US, there is no federal hate speech law; TikTok's policy exceeds legal requirements, and its moderation decisions are protected under Section 230. The UK's Online Safety Act 2023 (ss. 12-15) requires risk assessments treating hate speech as priority illegal content.
TikTok's Community Guidelines grant the platform broad, largely discretionary authority to remove content and suspend or permanently ban accounts for violations ranging from explicit harms such as child exploitation to broadly defined categories like 'misinformation' and 'harmful or dangerous acts,' which may affect creators and ordinary users alike. Users under 16 face additional content restrictions and feature limitations, and users under 13 are subject to a separate, more restrictive experience under COPPA compliance obligations. Users can appeal content removals and account actions directly within the TikTok app by navigating to Settings, then Support, then Report a Problem.