TikTok prohibits 'coordinated inauthentic behavior' — including the use of fake accounts, bots, or coordinated networks to artificially amplify content, manipulate trending topics, or conduct influence operations — and can remove content and accounts associated with such activity.
This provision gives TikTok broad authority to remove accounts it suspects of coordinated manipulation, which could sweep in legitimate coordinated advocacy or grassroots organizing, particularly during political campaigns.
REGULATORY FRAMEWORK: The EU DSA (Art. 34-35) requires VLOPs to identify and mitigate systemic risks from information manipulation, including state-sponsored influence operations. The EU Code of Practice on Disinformation commits TikTok to disrupting advertising revenues of disinformation actors. In the US, the FTC has jurisdiction over deceptive practices involving fake reviews or artificial engagement under FTC Act Section 5 and 16 C.F.R. Part 255. The FEC regulates political advertising disclosure but does not directly regulate organic influence operations. FARA (22 U.S.C. § 611) may apply to foreign-directed influence operations.
TikTok's Community Guidelines grant the platform broad, largely discretionary authority to remove content and to suspend or permanently ban accounts. Violations range from explicit harms such as child exploitation to broadly defined categories like 'misinformation' and 'harmful or dangerous acts,' and enforcement may affect creators and ordinary users alike. Users under 16 face additional content restrictions and feature limitations, and users under 13 are placed in a separate, more restrictive experience under COPPA compliance obligations. Content removals and account actions can be appealed directly within the TikTok app by navigating to Settings, then Support, then Report a Problem.