8 provisions total
2 high severity
4 medium severity
2 low severity
Summary

This is Mistral AI's privacy policy, covering how the company collects and uses your personal data when you use products like Le Chat and Mistral AI Studio. The key point: if you use the free version of Le Chat or the free APIs, Mistral AI may use your chat messages and the AI-generated responses to train its models unless you actively opt out. You can opt out of model training through your account settings, and you can also turn off or delete the Memory feature, which stores personal details drawn from your conversations.

Technical Summary

This Privacy Policy, effective April 8, 2026, governs Mistral AI's collection and processing of personal data for individual users of its consumer-facing products (Le Chat, Mistral AI Studio), with Mistral AI (a French entity, SIREN 952 418 325) acting as data controller and invoking GDPR lawful bases including contractual performance, legitimate interest, and consent. The document obligates Mistral AI to honor data subject rights (access, rectification, erasure, portability, objection, and restriction) and grants users opt-out rights regarding the use of their Inputs and Outputs for AI model training. Notably, the policy relies on 'legitimate interest' as the lawful basis for AI model training on free-tier user Inputs and Outputs — a contested legal ground under GDPR that has drawn regulatory scrutiny across multiple EU supervisory authorities — and explicitly carves out paid API and Le Chat Enterprise users from this training use. The policy engages GDPR (Regulation 2016/679), the French Data Protection Act (Loi Informatique et Libertés), and the EU AI Act (Regulation 2024/1689); the primary enforcement authority is the French CNIL, though cross-border processing triggers the Article 56 GDPR cooperation mechanism with other EU supervisory authorities. Material compliance considerations include the adequacy of the legitimate interest assessment (LIA) for model training, the Memory feature's handling of sensitive health data under GDPR Article 9, and the policy's explicit exclusion of business users, who are directed to a separate Data Processing Addendum.

Evidence Provenance
Captured April 29, 2026 08:11 UTC
Document ID CA-D-000443
Version ID CA-V-001021
Wayback Machine Archived versions available
SHA-256 ab6ea5e3ac35578430bfa2c6958460947ea11b245373b931812b918094ad9b7d
✓ Snapshot stored ✓ Text extracted ✓ Change verified ✓ Cryptographically signed
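The published SHA-256 digest above lets anyone independently verify a local copy of the captured document. A minimal sketch of that check, assuming a hypothetical local file path (the digest constant is the one published in this provenance block):

```python
import hashlib

# Digest published in the Evidence Provenance block above.
EXPECTED_SHA256 = "ab6ea5e3ac35578430bfa2c6958460947ea11b245373b931812b918094ad9b7d"

def sha256_of_file(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_provenance(path: str, expected: str = EXPECTED_SHA256) -> bool:
    """True if the local capture's digest equals the published one."""
    return sha256_of_file(path) == expected
```

Note that the digest is computed over the exact captured bytes, so any re-encoding or whitespace normalization of a local copy will produce a mismatch even if the visible text is identical.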
Change Timeline
High Severity — 2 provisions
Medium Severity — 4 provisions
Low Severity — 2 provisions
