You are not allowed to reverse engineer, copy, or create derivative works based on Cohere's AI models or services, and you cannot help others do so either.
Enterprise customers cannot inspect or audit the internal workings of Cohere's AI models under these Terms, which creates challenges for AI governance, bias auditing, and regulatory compliance in sectors requiring algorithmic transparency.
This clause prevents users from understanding how Cohere's AI models work internally, which limits transparency and the ability to audit AI systems for bias or accuracy — a growing concern in regulated AI deployments.
REGULATORY FRAMEWORK: Reverse engineering prohibitions engage the EU AI Act Art. 13 (transparency obligations for high-risk AI systems) and Art. 12 (logging requirements), which may require deployers to have sufficient insight into an AI system's operation to fulfill their own compliance obligations. In the US, the DMCA (17 USC §1201) provides limited reverse engineering exemptions for interoperability and security research. The EU's Cyber Resilience Act may further limit such prohibitions for security research purposes.