Cohere · Cohere Responsible Use Policy

Prohibited Use: Non-Consensual Synthetic Media and Deepfakes

High severity · Medium confidence · Explicit document language · Unique · 0 of 325 platforms
Document Record

What it is

The policy prohibits using Cohere's AI to create fake intimate images of real people without their permission, or to produce other synthetic media depicting real individuals without consent.

This analysis describes what Cohere's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This provision addresses a category of AI-generated harm that is increasingly regulated at the state and national level. Operators building image or video generation applications on Cohere's API must implement controls to prevent this use regardless of user requests.

Interpretive note: The scope of "synthetic media of real persons" beyond intimate imagery is not fully defined; applying the provision to satire, journalism, or artistic uses of AI-generated likenesses may require case-by-case assessment.

Consumer impact (what this means for users)

Users cannot use Cohere-powered applications to generate fake intimate or deceptive synthetic media of real individuals without their consent, and operators are responsible for ensuring their platforms do not permit this use.

Cross-platform context

See how other platforms handle "Prohibited Use: Non-Consensual Synthetic Media and Deepfakes" and similar clauses.


Monitoring

Cohere has changed this document before.

Original Clause Language

"Do not use Cohere's services to generate non-consensual intimate imagery or to create synthetic media of real persons without their consent."

— Excerpt from the Cohere Responsible Use Policy

ConductAtlas Analysis

Institutional analysis (Compliance & governance intelligence)

Regulatory landscape: This provision engages an expanding set of US state laws prohibiting non-consensual deepfake intimate imagery, including laws in California, Texas, Virginia, and Georgia. Federal legislative proposals exist, but no comprehensive US federal law was in force at the time of this analysis. The EU AI Act and proposed EU regulations on synthetic media may also apply. The FTC has indicated interest in AI-generated deceptive content under its unfair or deceptive practices authority.

Governance exposure: Medium to High, depending on the operator's use case. Operators offering image or video generation capabilities face heightened exposure given the technical ease of generating synthetic media. Platforms with large consumer user bases are particularly vulnerable to misuse.

Jurisdiction flags: US state laws vary significantly in their definitions, covered persons, and penalties. California, Texas, and Virginia have enacted specific non-consensual deepfake statutes. EU member states implementing the EU AI Act may impose additional obligations on AI system providers. Organizations serving global user bases face multi-jurisdictional compliance obligations.

Contract and vendor implications: Operators in media, entertainment, and content creation sectors should assess whether their applications could foreseeably be used to generate non-consensual synthetic media, and should implement technical and procedural controls accordingly. Vendor agreements should include representations that downstream use cases comply with applicable synthetic media laws.

Compliance considerations: Compliance teams should monitor evolving state and federal legislation on synthetic media, assess whether existing content moderation systems adequately detect and prevent non-consensual deepfake generation, and ensure user-facing terms of service clearly prohibit this use.


Applicable agencies

  • FTC: has authority over unfair or deceptive practices, including AI-generated deceptive synthetic media targeting consumers.
  • State attorneys general: have jurisdiction over non-consensual intimate imagery and deepfake laws in their respective states.

Provision details

Document information
Document
Cohere Responsible Use Policy
Entity
Cohere
Document last updated
May 12, 2026
Tracking information
First tracked
May 12, 2026
Last verified
May 12, 2026
Record ID
CA-P-011993
Document ID
CA-D-00830
Evidence Provenance
Source URL
Wayback Machine
Content hash (SHA-256)
525a1023544d802d0b69aead1ed2f42d817072b058c572837c434d0b14e12fa2
Analysis generated
May 12, 2026 16:53 UTC
Methodology
Evidence
✓ Snapshot stored   ✓ Hash verified
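The content hash above can be reproduced with any standard SHA-256 implementation. A minimal sketch in Python, assuming a locally saved copy of the archived snapshot (the filename here is hypothetical; the expected digest is the content hash published in this record):

```python
import hashlib
import os

def sha256_of(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical snapshot filename; the digest is the recorded content hash.
SNAPSHOT = "cohere-responsible-use-policy.html"
EXPECTED = "525a1023544d802d0b69aead1ed2f42d817072b058c572837c434d0b14e12fa2"

if os.path.exists(SNAPSHOT):
    if sha256_of(SNAPSHOT) == EXPECTED:
        print("Hash verified: snapshot matches the archived record.")
    else:
        print("Mismatch: the document has changed since capture.")
```

A matching digest confirms the local copy is byte-for-byte identical to the version captured at analysis time; any edit to the document, however small, produces a different hash.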
Citation Record
Entity: Cohere
Document: Cohere Responsible Use Policy
Record ID: CA-P-011993
Captured: 2026-05-12 16:53:50 UTC
SHA-256: 525a1023544d802d…
URL: https://conductatlas.com/platform/cohere/cohere-responsible-use-policy/prohibited-use-non-consensual-synthetic-media-and-deepfakes/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity
High
Categories


Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Cohere's Prohibited Use: Non-Consensual Synthetic Media and Deepfakes clause do?

The clause prohibits using Cohere's services to generate non-consensual intimate imagery or to create other synthetic media of real persons without their consent. Because this category of AI-generated harm is increasingly regulated at the state and national level, operators building image or video generation applications on Cohere's API must implement controls to prevent this use regardless of user requests.

How does this clause affect you?

Users cannot use Cohere-powered applications to generate fake intimate or deceptive synthetic media of real individuals without their consent, and operators are responsible for ensuring their platforms do not permit this use.

Is ConductAtlas affiliated with Cohere?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Cohere.