Flagged provisions: 8 total (2 high severity, 5 medium severity, 1 low severity)
Summary

This is Mistral AI's rules document setting out what you are and aren't allowed to do when using its AI products, such as Le Chat and Mistral AI Studio. The single most important thing to know: Mistral will permanently terminate your account and report you to law enforcement if you attempt to generate CSAM, and it reserves the right to suspend or terminate accounts for any violation of its prohibited-content rules. Mistral can also update these rules at any time without notifying you directly, so check the policy page regularly if you rely on its services.

Technical Summary

Mistral AI's Usage Policy governs all users of Mistral AI products accessible on its platform (including Le Chat and Mistral AI Studio), explicitly excluding deployments on customer or partner infrastructure and open-source models, and operates alongside Mistral's Terms of Service as a contractual behavioral framework. The most significant obligations it creates are absolute prohibitions on specified content categories — CSAM (with mandatory law enforcement reporting and immediate account termination), non-consensual intimate imagery, hate speech, violence, fraud, misinformation, and security violations — binding on all individuals, organizations, and businesses.

Notably, the policy prohibits using Mistral AI products to provide professional advice (financial, legal, or medical). This is an unusually broad restriction that could limit legitimate use cases and deviates from the more permissive stance taken by comparable AI providers such as OpenAI and Anthropic.

The policy engages the EU AI Act (particularly prohibited AI practices under Article 5 and high-risk system obligations under Article 6), GDPR (regarding privacy violations using the platform), and CSAM-related mandatory reporting obligations under national laws, including France's LCEN and equivalent EU member state implementations of Directive 2011/93/EU. The unilateral right to update the policy without notice or versioned consent creates a material compliance consideration for enterprise customers who have integrated the platform into regulated workflows.

Evidence Provenance
Captured April 29, 2026 08:11 UTC
Document ID CA-D-000445
Version ID CA-V-001023
Wayback Machine: archived versions available
SHA-256 31fe83c01de47d7a5b6fcd310a278d1b6a08594a71feb54fcf3f2c5f6c06ab36
✓ Snapshot stored ✓ Text extracted ✓ Change verified ✓ Cryptographically signed
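The published SHA-256 digest lets anyone independently verify that a local copy of the captured snapshot matches what was archived. A minimal Python sketch of that check follows; the snapshot file path is a hypothetical placeholder, since the capture's storage format is not specified here.

```python
import hashlib
import hmac

# Digest published in the Evidence Provenance section above.
EXPECTED_SHA256 = "31fe83c01de47d7a5b6fcd310a278d1b6a08594a71feb54fcf3f2c5f6c06ab36"


def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def matches_expected(path: str) -> bool:
    """True if the file's digest equals the published capture digest."""
    # compare_digest is constant-time; for a public hash this is
    # mostly hygiene rather than a security requirement.
    return hmac.compare_digest(sha256_of_file(path), EXPECTED_SHA256)
```

Streaming in chunks keeps memory flat even for large snapshots; a mismatch means the local copy differs from the signed capture (for example, because of re-encoding or truncation), not necessarily tampering.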
