Cohere · Cohere Usage Policy

Prohibition on Influence Operations and Disinformation

Severity: Medium · Confidence: Medium · Explicit document language · Unique (0 of 325 platforms)
Document Record

What it is

Developers cannot use Cohere's models to create disinformation campaigns or conduct influence operations, including generating false narratives, fabricated content, or coordinated inauthentic behavior.

This analysis describes what Cohere's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This prohibition covers a category of AI misuse that has attracted significant regulatory and legislative attention globally; developers who build such capabilities on Cohere's infrastructure face both policy breach and increasing legal exposure as AI-generated disinformation legislation develops.

Interpretive note: The terms 'influence operations' and 'disinformation' are not defined in the document, creating interpretive ambiguity for edge cases involving persuasive content, satire, or political advertising.

Consumer impact (what this means for users)

This provision protects the general public from being targeted by AI-generated disinformation or influence operations produced using Cohere's models, directly addressing the safety of individuals who consume information online.

How other platforms handle this

OpenAI Medium

OpenAI prohibits using its services to build AI personas for covert influence operations, to generate content designed for political propaganda or astroturfing campaigns, to create fake social media profiles, and to generate content that falsely portrays real people.

Midjourney Medium

Do not generate images for political campaigns or to try to influence the outcome of an election. Do not generate images to spread misinformation or disinformation. Do not generate images to attempt to or to actually deceive or defraud anyone. Do not intentionally mislead recipients of generated ima...

Runway Medium

You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.


Monitoring

Cohere has changed this document before.

Original Clause Language

"Certain use cases, such as violence, hate speech, fraud, and privacy violations, are strictly prohibited." [The policy identifies influence operations and disinformation as prohibited use categories.]

— Excerpt from Cohere's Cohere Usage Policy

ConductAtlas Analysis

Institutional analysis (Compliance & governance intelligence)

(1) REGULATORY LANDSCAPE: AI-generated disinformation and influence operations are an emerging area of regulation. The EU AI Act and the EU Digital Services Act address certain manipulation and disinformation risks associated with AI systems. In the US, the FTC has authority over deceptive practices, including AI-generated false advertising or impersonation. Several US states have enacted or proposed legislation specifically targeting AI-generated political disinformation.

(2) GOVERNANCE EXPOSURE: Medium. The prohibition is categorical, but the document does not define 'influence operations' or 'disinformation', creating interpretive ambiguity for edge cases such as persuasive marketing content, political advertising, or satirical content.

(3) JURISDICTION FLAGS: EU deployments face the highest regulatory exposure given the Digital Services Act's requirements for very large online platforms and the EU AI Act's provisions on AI-generated manipulation. US developers working in political advertising or public communications contexts should monitor rapidly evolving state-level AI disclosure legislation.

(4) CONTRACT AND VENDOR IMPLICATIONS: Developers building content generation or social media management tools should specifically review whether their product's capabilities could be weaponized for influence operations and include appropriate use restrictions in their own end-user agreements.

(5) COMPLIANCE CONSIDERATIONS: Legal teams should assess whether any proposed use case involves persuasive content generation at scale, synthetic persona creation, or coordinated content distribution, and review these against the prohibition's scope before deployment.


Applicable agencies

  • FTC
    The FTC has authority over deceptive practices including AI-generated false content used in consumer-facing marketing or impersonation contexts.

Applicable regulations

  • CFAA (United States, Federal)
  • Trump Executive Order on AI Policy Framework (US)

Provision details

Document information
Document: Cohere Usage Policy
Entity: Cohere
Document last updated: May 5, 2026

Tracking information
First tracked: April 30, 2026
Last verified: May 12, 2026
Record ID: CA-P-011007
Document ID: CA-D-00442
Evidence Provenance
Source URL: Wayback Machine
Content hash (SHA-256): 2937f674a79ab03784eab9a8774b7c807068d6f695cd81b3eb7bc9419a338c76
Analysis generated: April 30, 2026 06:46 UTC
Evidence: ✓ Snapshot stored · ✓ Hash verified
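The "Hash verified" claim above can be reproduced locally by hashing the stored snapshot and comparing it to the recorded digest. A minimal sketch follows; the function names are illustrative, and it assumes the hash was computed over the raw snapshot bytes (raw HTML rather than normalized text), which the record does not specify:

```python
import hashlib

# Recorded digest from this page's Evidence Provenance section.
RECORDED_SHA256 = "2937f674a79ab03784eab9a8774b7c807068d6f695cd81b3eb7bc9419a338c76"

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of raw snapshot bytes as lowercase hex."""
    return hashlib.sha256(data).hexdigest()

def verify_snapshot(data: bytes, recorded: str = RECORDED_SHA256) -> bool:
    """Check a stored snapshot against the recorded digest."""
    return sha256_hex(data) == recorded.lower()
```

In practice, the snapshot bytes would be read from the archived file (e.g. `verify_snapshot(open(path, "rb").read())`); any normalization applied before hashing would have to match whatever the archiving pipeline did.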
Citation Record
Entity: Cohere
Document: Cohere Usage Policy
Record ID: CA-P-011007
Captured: 2026-04-30 06:46:20 UTC
SHA-256: 2937f674a79ab037…
URL: https://conductatlas.com/platform/cohere/cohere-usage-policy/prohibition-on-influence-operations-and-disinformation/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity: Medium



Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Cohere's Prohibition on Influence Operations and Disinformation clause do?

This prohibition covers a category of AI misuse that has attracted significant regulatory and legislative attention globally; developers who build such capabilities on Cohere's infrastructure face both policy breach and increasing legal exposure as AI-generated disinformation legislation develops.

How does this clause affect you?

This provision protects the general public from being targeted by AI-generated disinformation or influence operations produced using Cohere's models, directly addressing the safety of individuals who consume information online.

Is ConductAtlas affiliated with Cohere?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Cohere.