The policy prohibits using Cohere's AI to produce political propaganda, divisive rhetoric, targeted political advertising, or content specifically designed to interfere with elections.
This analysis describes what Cohere's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
The policy's reference to content that 'could unduly alter people's political views' or 'sow division' covers a broad range of persuasive political content, and operators deploying AI in media, communications, or public affairs contexts should assess whether their use cases fall within or outside this prohibition.
Interpretive note: The threshold for content that 'could unduly alter people's political views' is not defined and may be applied inconsistently; legitimate journalism and civic engagement use cases may require clarification from Cohere.
Operators and users cannot use Cohere's services to generate political propaganda, election interference content, or targeted political messaging based on ideological profiling, which affects media, political consulting, and communications platforms built on the API.
"Do not use Cohere's services to generate rhetoric that could unduly alter people's political views, sow division, or be used for political ads, propaganda, or targeting strategies based on political ideology, or to create content designed to interfere with elections." — Excerpt from Cohere's Responsible Use Policy
Regulatory landscape: This provision engages Federal Election Commission (FEC) regulations on political advertising and AI-generated political content, emerging state laws requiring disclosure of AI in political advertising (California, Texas, Minnesota, and others), and international election integrity frameworks. The FTC may have jurisdiction over deceptive AI-generated political content. The EU AI Act classifies certain AI uses in electoral contexts as high-risk.

Governance exposure: Medium. The phrase 'unduly alter people's political views' lacks a defined threshold, creating interpretive ambiguity for operators in legitimate journalism, public affairs, or civic engagement contexts. The prohibition on 'targeting strategies based on political ideology' may affect operators using AI for audience segmentation.

Jurisdiction flags: The US does not currently have a comprehensive federal AI-in-elections statute, but multiple states have enacted or proposed disclosure requirements. EU member states face obligations under the EU AI Act's high-risk classification for electoral AI systems. Organizations operating across jurisdictions face a complex and evolving regulatory landscape.

Contract and vendor implications: Media companies, political technology firms, and public affairs consultancies using the Cohere API should assess whether their use cases involve content categories covered by this prohibition and document their compliance rationale. B2B agreements should include representations regarding permitted use in political and electoral contexts.

Compliance considerations: Compliance teams should monitor FEC guidance on AI in political advertising, review state-level AI disclosure requirements, and assess whether content generation workflows include adequate human review for politically sensitive outputs.
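The human-review step mentioned in the compliance considerations can be sketched as a simple pre-publication gate that routes politically sensitive model output to a review queue instead of publishing it directly. This is an illustrative sketch under stated assumptions, not Cohere's API or a compliance-grade classifier: the keyword list, function names, and routing logic are hypothetical, and a production workflow would use a trained classifier plus jurisdiction-specific guidance.

```python
import re

# Hypothetical marker list for illustration only; a real deployment would
# rely on a trained classifier, not a static keyword pattern.
POLITICAL_MARKERS = re.compile(
    r"\b(election|ballot|candidate|vote|propaganda|campaign)\b",
    re.IGNORECASE,
)


def requires_human_review(generated_text: str) -> bool:
    """Flag model output that touches political topics for human review."""
    return bool(POLITICAL_MARKERS.search(generated_text))


def publish(generated_text: str, review_queue: list) -> str:
    """Route flagged drafts to a review queue instead of publishing them."""
    if requires_human_review(generated_text):
        review_queue.append(generated_text)
        return "queued_for_review"
    return "published"
```

The design point is the routing, not the detection: however sensitivity is scored, flagged output should reach a human reviewer before release rather than being blocked or published automatically.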
ConductAtlas is an independent monitoring service and is not affiliated with, endorsed by, or sponsored by Cohere.