The policy prohibits using Cohere's AI to create fake intimate images of real people without their permission, or to produce other synthetic media depicting real individuals without consent.
This analysis describes what Cohere's agreement states, permits, or reserves; it does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This provision addresses a category of AI-generated harm increasingly regulated at the state and national level, and operators building image or video generation applications on Cohere's API must implement controls to prevent this use regardless of user requests.
Interpretive note: The scope of 'synthetic media of real persons' beyond intimate imagery is not fully defined, and application to satire, journalism, or artistic uses of AI-generated likenesses may require case-by-case assessment.
Users cannot use Cohere-powered applications to generate fake intimate or deceptive synthetic media of real individuals without their consent, and operators are responsible for ensuring their platforms do not permit this use.
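The operator obligation described above can be illustrated with a minimal sketch of a pre-generation screening gate. Everything here is an assumption for illustration: the function names, the marker lists, and the consent flag are hypothetical and are not part of Cohere's API or any specific moderation product; a production system would use a trained classifier and a real consent-verification workflow rather than keyword matching.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical marker lists for illustration only; real systems would use
# trained classifiers, not substring checks.
PROHIBITED_MARKERS = {
    "intimate_imagery": ["nude", "undress", "intimate photo"],
    "identity_misuse": ["deepfake of", "face swap", "make it look like"],
}

@dataclass
class ScreeningResult:
    allowed: bool
    category: Optional[str] = None
    reason: Optional[str] = None

def screen_generation_request(prompt: str, subject_consent_on_file: bool) -> ScreeningResult:
    """Deny requests that combine a real-person likeness with a prohibited
    content category unless documented consent is on file."""
    text = prompt.lower()
    for category, markers in PROHIBITED_MARKERS.items():
        if any(m in text for m in markers) and not subject_consent_on_file:
            return ScreeningResult(
                allowed=False,
                category=category,
                reason="prohibited synthetic-media request without documented consent",
            )
    return ScreeningResult(allowed=True)
```

Note the design choice implied by the policy: the gate denies by default whenever a prohibited category is detected and no consent record exists, regardless of how the user phrases the request.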
Cross-platform context
See how other platforms handle "Prohibited Use: Non-Consensual Synthetic Media and Deepfakes" and similar clauses.
Compare across platforms →

Monitoring
Cohere has changed this document before.
"Do not use Cohere's services to generate non-consensual intimate imagery or to create synthetic media of real persons without their consent." — Excerpt from Cohere's Responsible Use Policy
REGULATORY LANDSCAPE: This provision engages an expanding set of US state laws prohibiting non-consensual deepfake intimate imagery, including laws in California, Texas, Virginia, Georgia, and others. Federal legislative proposals exist, but no comprehensive US federal law was in force at the time of this analysis. The EU AI Act and proposed EU regulations on synthetic media may also apply. The FTC has indicated interest in AI-generated deceptive content under its unfair or deceptive practices authority.

GOVERNANCE EXPOSURE: Medium to High, depending on the operator's use case. Operators offering image or video generation capabilities face heightened exposure given the technical ease of generating synthetic media. Platforms with large consumer user bases are particularly vulnerable to misuse.

JURISDICTION FLAGS: US state laws vary significantly in their definitions, covered persons, and penalties. California, Texas, and Virginia have enacted specific non-consensual deepfake statutes. EU member states implementing the EU AI Act may impose additional obligations on AI system providers. Organizations serving global user bases face multi-jurisdictional compliance obligations.

CONTRACT AND VENDOR IMPLICATIONS: Operators in media, entertainment, and content creation sectors should assess whether their applications could foreseeably be used to generate non-consensual synthetic media and implement technical and procedural controls accordingly. Vendor agreements should include representations that downstream use cases comply with applicable synthetic media laws.

COMPLIANCE CONSIDERATIONS: Compliance teams should monitor evolving state and federal legislation on synthetic media, assess whether existing content moderation systems adequately detect and prevent non-consensual deepfake generation, and ensure user-facing terms of service clearly prohibit this use.
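The compliance considerations above imply that moderation decisions should be reviewable after the fact. A minimal sketch of an append-only audit record for each screening decision follows; the field names and values are assumptions for illustration, not a required schema from Cohere or any regulator.

```python
import json
from datetime import datetime, timezone

def audit_record(request_id: str, decision: str, category: str, policy_version: str) -> str:
    """Serialize one moderation decision as a JSON line so compliance teams
    can later reconstruct why a generation request was allowed or denied."""
    record = {
        "request_id": request_id,
        "decision": decision,            # "allowed" or "denied"
        "category": category,            # e.g. "non_consensual_synthetic_media"
        "policy_version": policy_version,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # sort_keys keeps the line format stable across runs, which simplifies diffing
    return json.dumps(record, sort_keys=True)
```

In practice each line would be appended to tamper-evident storage; pairing the record with the policy version in force at decision time is what makes the log useful during a regulatory inquiry.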
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Cohere.