This analysis describes what Cohere's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This cascading responsibility structure means that if an app built on Cohere's API enables prohibited content, the app developer, not the individual end user, bears responsibility to Cohere. Developers therefore need to build and maintain controls over how their users interact with the AI.
Interpretive note: The policy states developer obligations and prohibited use categories but does not explicitly articulate the legal mechanism by which end-user violations are attributed to developers; the precise scope of developer liability depends on governing law and the developer agreement.
This policy primarily governs developers and businesses building applications using Cohere's API rather than direct end consumers; however, its provisions affect end users indirectly by establishing what types of applications Cohere permits to be built on its platform. The policy prohibits developers from deploying Cohere models to generate content that facilitates violence, enables fraud, produces CSAM, or conducts unauthorized surveillance, which provides a baseline of protection for individuals who interact with Cohere-powered applications. Consumers who believe an application built on Cohere's API is violating these terms can report concerns to Cohere through the contact mechanisms referenced in the broader Terms of Service.
How other platforms handle this
You are responsible for your Applications, including ensuring that your Applications comply with these terms. You are also responsible for obtaining any required consents from end users and for any claims by end users relating to your Applications.
You are responsible for ensuring that your end users comply with these Terms and our usage policies. Any violation of these Terms by your end users will be deemed a violation by you, and we may suspend or terminate your access to the API accordingly.
If you access our generative AI services through the API, you're also responsible for ensuring your use, and the use by those who access the services through your platform, complies with our usage policies. You must implement appropriate safeguards to prevent prohibited uses by your users.
Monitoring
Cohere has changed this document before.
"Developers must outline and get approval for their use case to access the Cohere API, understanding the models and limitations. They should refer to model cards for detailed information and document potential harms of their application. Certain use cases, such as violence, hate speech, fraud, and privacy violations, are strictly prohibited." (Excerpt from Cohere's Usage Policy)
Built from archived source documents, structured governance mappings, and historical version tracking.
ConductAtlas has identified this type of provision across 1 platform. See the full comparison.
ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Cohere.