8 provisions total
6 High severity
2 Medium severity
0 Low severity
Summary

This is Cohere's rulebook for how its AI models and API services may and may not be used, covering everyone from individual developers to large enterprises building products on Cohere's technology. The policy prohibits specific uses, including generating content related to weapons of mass destruction, child sexual abuse material, cyberattacks, non-consensual intimate imagery, and coordinated disinformation. It also holds operators (businesses using the API) responsible for ensuring their applications do not enable these uses downstream. If you are building a product or application on Cohere's API, review whether your use case falls within the permitted categories and ensure your own terms of service and content-moderation practices align with Cohere's restrictions.

Technical / Legal Breakdown

This document is Cohere's Acceptable Use Policy (AUP), governing permissible and prohibited uses of its AI models, APIs, and related services, including the Command R and Command R+ model family. The policy establishes categories of prohibited conduct, including using the services to generate content that facilitates weapons of mass destruction, CSAM, cyberattacks, disinformation, or non-consensual intimate imagery, and states that operators and users bear responsibility for ensuring downstream applications comply with these restrictions.

The policy distinguishes operator-level permissions (businesses building on the API) from end-user-level permissions, establishing a layered responsibility framework in which operators may customize permitted behaviors within bounds Cohere sets. This allocation of responsibility is significant for B2B compliance teams assessing how liability is distributed in API deployments.

The document also engages regulatory frameworks, including the EU AI Act, which classifies certain AI applications as prohibited or high-risk, and national laws governing CSAM, cyberweapons, and election interference; depending on jurisdiction, obligations under these frameworks may constrain or supplement how the AUP applies in practice. Material compliance considerations include the policy's prohibition on use in certain high-stakes autonomous decision-making contexts without human oversight, and its explicit restriction on generating synthetic media of real persons without consent, both of which interact with emerging AI governance obligations in the EU and several US states.

Institutional Analysis

Institutional analysis is available with Professional: regulatory exposure by statute, material risk assessment, vendor due diligence action items, and enforcement precedent.

Start Professional free trial

Monitoring

Cohere has updated this document before.

Watcher includes same-day alerts, structured change summaries, and monitoring for up to 10 platforms.

Start Watcher free trial · Create a free account →

Professional Governance Intelligence

Need provision-level monitoring and regulatory mapping?

Professional includes governance timelines, compliance memos, audit-ready analysis, and full provision tracking.


Cross-platform context

See how other platforms handle Operator Responsibility for Downstream Use and similar clauses.

Compare across platforms →

Mapped Governance Frameworks

California AB 2013: AI Training Data Transparency (US-CA). View official text ↗
Archival Provenance: Source & Archival Record
Last Captured May 12, 2026 05:44 UTC
Capture Method Automated scheduled archival capture
Document ID CA-D-000830
Version ID CA-V-002484
SHA-256 03becf091e2454db0d8976bfced05081cc44247c63151117b5b5e4373f22ba84
✓ Snapshot stored ✓ Text extracted ✓ Change verified ✓ Hash verified
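The record above pairs the capture with a SHA-256 digest and a "Hash verified" status. As a minimal sketch of what such a check reduces to (assuming the digest covers the captured bytes as stored; the archiver's exact canonicalization, raw HTML versus extracted text, is not stated here), verification is just recomputing the hash and comparing:

```python
import hashlib

# Digest recorded in the archival record above.
EXPECTED_SHA256 = "03becf091e2454db0d8976bfced05081cc44247c63151117b5b5e4373f22ba84"

def sha256_hex(data: bytes) -> str:
    """Return the lowercase hex SHA-256 digest of raw bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_capture(data: bytes, expected_hex: str = EXPECTED_SHA256) -> bool:
    """Check a captured document's bytes against the recorded digest.

    Returns True only if the recomputed digest matches the recorded one,
    which is what a "Hash verified" status presumably asserts.
    """
    return sha256_hex(data) == expected_hex.lower()
```

Any byte-level change to the stored snapshot (re-encoding, whitespace normalization) would produce a different digest, so the comparison only holds if verification is run against the exact bytes that were originally hashed.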

Governance Monitoring

Monitor governance changes across the platforms you rely on.

Structured alerts for policy changes, governance events, and provision updates across 318+ platforms.

Create free account · Compare plans