Developers cannot use Cohere's models to create disinformation campaigns or conduct influence operations, including generating false narratives, fabricated content, or coordinated inauthentic behavior.
This analysis describes what Cohere's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.
This prohibition covers a category of AI misuse that has attracted significant regulatory and legislative attention globally; developers who build such capabilities on Cohere's infrastructure risk both breaching the policy and growing legal exposure as AI-generated disinformation legislation develops.
Interpretive note: The terms 'influence operations' and 'disinformation' are not defined in the document, creating interpretive ambiguity for edge cases involving persuasive content, satire, or political advertising.
This provision protects the general public from being targeted by AI-generated disinformation or influence operations produced using Cohere's models, directly addressing the safety of individuals who consume information online.
How other platforms handle this
OpenAI prohibits using its services to build AI personas for covert influence operations, generate content designed for political propaganda or astroturfing campaigns, create fake social media profiles, or generate content that falsely portrays real people.
Do not generate images for political campaigns or to try to influence the outcome of an election. Do not generate images to spread misinformation or disinformation. Do not generate images to attempt to or to actually deceive or defraud anyone. Do not intentionally mislead recipients of generated ima...
You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.
Monitoring
Cohere has changed this document before.
"Certain use cases, such as violence, hate speech, fraud, and privacy violations, are strictly prohibited. [The policy identifies influence operations and disinformation as prohibited use categories.]— Excerpt from Cohere's Cohere Usage Policy
(1) REGULATORY LANDSCAPE: AI-generated disinformation and influence operations are an emerging area of regulation. The EU AI Act and the EU Digital Services Act address certain manipulation and disinformation risks associated with AI systems. In the US, the FTC has authority over deceptive practices, including AI-generated false advertising or impersonation. Several US states have enacted or proposed legislation specifically targeting AI-generated political disinformation.

(2) GOVERNANCE EXPOSURE: Medium. The prohibition is categorical, but the terms 'influence operations' and 'disinformation' are not defined in the document, creating interpretive ambiguity for edge cases such as persuasive marketing content, political advertising, or satirical content.

(3) JURISDICTION FLAGS: EU deployments face the highest regulatory exposure, given the Digital Services Act's requirements for very large online platforms and the EU AI Act's provisions on AI-generated manipulation. US developers working in political advertising or public communications contexts should monitor rapidly evolving state-level AI disclosure legislation.

(4) CONTRACT AND VENDOR IMPLICATIONS: Developers building content generation or social media management tools should specifically review whether their product's capabilities could be weaponized for influence operations, and include appropriate use restrictions in their own end-user agreements.

(5) COMPLIANCE CONSIDERATIONS: Legal teams should assess whether any proposed use case involves persuasive content generation at scale, synthetic persona creation, or coordinated content distribution, and review these against the prohibition's scope before deployment.
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Cohere.