Cohere prohibits using its AI to create disinformation campaigns, fake social media personas, or content designed to manipulate public opinion through deceptive means.
Consumers benefit from this prohibition because it prevents Cohere's technology from being weaponized to manipulate their political beliefs or deceive them through AI-generated fake news or personas.
This provision addresses one of the highest-profile AI misuse risks — foreign and domestic influence operations — and aligns Cohere with regulatory expectations under the EU Digital Services Act (DSA) and emerging US electoral-integrity frameworks.
REGULATORY FRAMEWORK: This prohibition engages the EU Digital Services Act (DSA) Art. 34 systemic-risk assessment obligations for very large online platforms; EU AI Act Art. 50 transparency requirements for AI-generated content; FEC regulations on AI-generated political advertising (11 CFR); and FTC Act Section 5 on deceptive practices — the latter two enforced by the FEC and FTC respectively.