Cohere states that it will not use the data you send through its platform (such as prompts and documents) to train its AI models unless you specifically agree to allow it.
This analysis describes what Cohere's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This provision directly addresses a common concern for enterprise customers deploying AI: whether proprietary business data submitted as prompts or documents could be incorporated into shared model training. The document states this requires explicit opt-in rather than an opt-out.
Interpretive note: The document is a commitments page rather than a binding contract; enforceability depends on whether these commitments are incorporated into the executed master service agreement.
Enterprise customers can submit prompts, completions, and documents to Cohere's platform with the assurance, per this document, that such data will not be used to train Cohere's models unless the customer has expressly opted in. The practical protection depends on whether this commitment is enforceable under the executed service agreement.
How other platforms handle this
Writer does not use Customer Data to train its AI models without explicit customer permission. Customer Data means the data, content, and information that customers and their end users submit to or through the Services.
We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.
We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization). We may leverage these models independent of user selection for processi...
Monitoring
Cohere has changed this document before.
"Cohere does not use customer data to train its models. Enterprise customers must explicitly opt in for their data to be used for any model training purposes." — Excerpt from Cohere's Enterprise Data Commitments
(1) REGULATORY LANDSCAPE: This provision engages GDPR Article 28 processor obligations, which require that a data processor act only on documented instructions from the controller. If enterprise customer data includes personal data, using it for model training without a documented legal basis and controller instruction would implicate GDPR obligations. The FTC Act's prohibition on unfair or deceptive practices is also relevant if commitments made on this page are not operationally implemented. Relevant enforcement authorities include EU data protection authorities, the UK ICO, and the FTC.

(2) GOVERNANCE EXPOSURE: High. The commitment not to use customer data for model training without opt-in is a foundational assurance in enterprise AI procurement. If this commitment is not reflected in the executed master service agreement or data processing addendum, its enforceability is uncertain. Organizations with strict data governance policies should confirm this commitment is contractually binding.

(3) JURISDICTION FLAGS: EU and EEA customers face heightened exposure because GDPR Article 28 requires that processing restrictions be documented in a binding data processing agreement, not merely disclosed on a commitments page. California enterprise customers should assess whether CCPA processor obligations align with this commitment. Regulated sectors (healthcare, financial services) should evaluate against sector-specific data handling requirements.

(4) CONTRACT AND VENDOR IMPLICATIONS: Procurement teams should confirm that the no-training commitment is expressly included in the master service agreement or a separately executed data processing addendum. A commitments page that is not incorporated by reference into the contract may not constitute a binding obligation. Vendor risk assessments should document this confirmation.
(5) COMPLIANCE CONSIDERATIONS: Compliance teams should audit whether the opt-in mechanism for model training is clearly defined, documented, and auditable. Data mapping exercises should classify customer prompt and completion data and verify that processing is restricted to the stated permitted purposes. Any changes to this commitment should trigger contract amendment review and potentially customer notification obligations.
How 10 AI platforms describe the use of user data for model training, improvement, and development, based on archived governance provisions.
Is ConductAtlas affiliated with Cohere? No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Cohere.