TikTok prohibits content it determines to be misinformation — including false information about elections, public health, and emergencies — and can remove such content or reduce its distribution even if it does not rise to a full removal threshold.
The misinformation category is broadly and subjectively defined, creating risk that legitimate speech — including satire, opinion, or contested factual claims — may be removed or suppressed, with limited recourse for affected users.
REGULATORY FRAMEWORK: In the EU, the DSA (Regulation 2022/2065, Art. 34-35) requires Very Large Online Platforms to conduct annual systemic risk assessments for information integrity risks, including misinformation, and implement mitigation measures subject to independent audit. The EU Code of Practice on Disinformation (2022) is a co-regulatory instrument TikTok has signed. In the US, there is no federal misinformation law, but the FTC Act Section 5 could apply if moderation practices are found to be deceptive. The First Amendment constrains government but not private platform action under current US law.
TikTok's Community Guidelines grant the platform broad, largely discretionary authority to remove content and to suspend or permanently ban accounts for violations ranging from explicit harms such as child exploitation to broadly defined categories such as 'misinformation' and 'harmful or dangerous acts'; enforcement can affect creators and ordinary users alike. Users under 16 face additional content restrictions and feature limitations, and users under 13 receive a separate, more restrictive experience under COPPA compliance obligations. Content removals and account actions can be appealed directly within the TikTok app by navigating to Settings, then Support, then Report a Problem.