You cannot use Mistral AI to create intimate or sexual images of real people unless all people depicted have explicitly agreed to it.
Any user who attempts to generate non-consensual intimate images of another person using Mistral AI products violates this policy and may be subject to account termination, as well as potential criminal or civil liability under applicable national laws.
This provision addresses a growing harm from generative AI, deepfake intimate imagery, and aligns with emerging legislation in multiple jurisdictions that criminalizes AI-generated non-consensual intimate images.
REGULATORY FRAMEWORK: This provision aligns with the EU AI Act's Article 5 prohibited practices concerning AI systems that could be used to exploit or deceive individuals, and with criminal law in multiple EU member states and the UK (UK Online Safety Act 2023, Section 188, which criminalizes sharing non-consensual intimate images; similar provisions exist in France under Loi n° 2018-703). In the US, the DEFIANCE Act (2024) and various state-level laws (California AB 602, Virginia Code § 18.2-386.2) create civil and criminal liability for AI-generated non-consensual intimate imagery. FTC Act Section 5 is also implicated where platforms fail to adequately prevent or address this harm.