Mistral AI · Mistral AI Usage Policy

Non-Consensual Intimate Imagery Prohibition

High severity

What it is

You cannot use Mistral AI to create intimate or sexual images of real people unless all people depicted have explicitly agreed to it.

Consumer impact (what this means for users)

Any user who attempts to generate non-consensual intimate images of another person using Mistral AI products violates this policy and may be subject to account termination, as well as potential criminal or civil liability under applicable national laws.

Cross-platform context

See how other platforms handle Non-Consensual Intimate Imagery Prohibition and similar clauses.


Why it matters (compliance & risk perspective)

This provision addresses a growing harm from generative AI — deepfake intimate imagery — and aligns with emerging legislation in multiple jurisdictions criminalizing AI-generated non-consensual intimate images.

Original clause language
You shall not use the Mistral AI Products to generate intimate images of any person without the explicit consent of all individuals involved.

Institutional analysis (Compliance & legal intelligence)

REGULATORY FRAMEWORK: This provision aligns with the EU AI Act Article 5's prohibited practices regarding AI systems that could be used to exploit or deceive individuals, and with criminal-law provisions in multiple EU member states and the UK (UK Online Safety Act 2023, Section 188, which criminalizes sharing non-consensual intimate images; similar provisions exist in France under Loi n° 2018-703). In the US, the DEFIANCE Act (2024) and various state-level laws (California AB 602, Virginia Code § 18.2-386.2) create civil and criminal liability for AI-generated non-consensual intimate imagery. FTC Act Section 5 is implicated where platforms fail to adequately prevent or address this harm.


Applicable agencies

  • FTC
    The FTC has authority over unfair or deceptive practices and has signaled enforcement interest in AI-generated non-consensual intimate imagery as a consumer harm.
  • State AG
    State attorneys general in California, Virginia, Texas, and other states have jurisdiction to enforce state laws criminalizing or creating civil liability for AI-generated non-consensual intimate imagery.

Provision details

Document information
  Document: Mistral AI Usage Policy
  Entity: Mistral AI
  Document last updated: April 29, 2026

Tracking information
  First tracked: April 30, 2026
  Last verified: April 30, 2026
  Record ID: CA-P-004157
  Document ID: CA-D-00445

Evidence provenance
  Source URL: Wayback Machine
  SHA-256: d65d8a1b8b57a55ee50c42e13a559c085eef6b73124deead6b2837c2784efeda
  Verified: ✓ Snapshot stored · ✓ Change verified
How to Cite
ConductAtlas Policy Archive
Entity: Mistral AI | Document: Mistral AI Usage Policy | Record: CA-P-004157
Captured: 2026-04-30 06:38:30 UTC | SHA-256: d65d8a1b8b57a55e…
URL: https://conductatlas.com/platform/mistral-ai/mistral-ai-usage-policy/non-consensual-intimate-imagery-prohibition/
Accessed: May 2, 2026
Classification
  Severity: High
  Categories:
