This page describes what the document states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability may vary by jurisdiction.

Methodology
This is Cohere's rulebook for how its AI models and API services may and may not be used, covering everyone from individual developers to large enterprises building products on top of Cohere's technology. The policy prohibits specific uses, including generating content related to weapons of mass destruction, child sexual abuse material, cyberattacks, non-consensual intimate imagery, and coordinated disinformation, and it holds operators (businesses using the API) responsible for ensuring their applications do not enable these uses downstream. If you are building a product or application on Cohere's API, review whether your use case falls within the permitted categories, and ensure your own terms of service and content moderation practices align with Cohere's restrictions.
This document is Cohere's Acceptable Use Policy (AUP), governing permissible and prohibited uses of its AI models, APIs, and related services, including the Command R and Command R+ model family. The terms establish categories of prohibited conduct, including using the services to generate content that facilitates weapons of mass destruction, CSAM, cyberattacks, disinformation, or non-consensual intimate imagery, and the agreement states that operators and users bear responsibility for ensuring downstream applications comply with these restrictions.

The policy distinguishes operator-level permissions (for businesses building on the API) from end-user-level permissions, establishing a layered responsibility framework in which operators may customize permitted behaviors within bounds Cohere sets. That distinction is operationally notable for B2B compliance teams assessing how liability is allocated in API deployments.

The document engages regulatory frameworks including the EU AI Act, which classifies certain AI applications as prohibited or high-risk, as well as national laws governing CSAM, cyberweapons, and election interference; compliance obligations under these frameworks may constrain or supplement how the AUP's terms apply in practice, depending on jurisdiction. Material compliance considerations include the policy's prohibition on use in certain high-stakes autonomous decision-making contexts without human oversight, and its explicit restriction on generating synthetic media of real persons without consent, both of which interact with emerging AI governance obligations in the EU and several US states.
Monitoring
Cohere has updated this document before.