By default, Mistral AI may use the prompts you type and the AI responses you receive to improve and train its AI models, unless you actively opt out. This does not apply if you pay for an API plan or use Le Chat Enterprise.
This analysis describes what Mistral AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.
Free-tier users' conversations are treated as training data by default, meaning your personal questions, instructions, and AI responses could influence how Mistral AI's models behave for all users unless you take action to opt out.
Free Le Chat users' inputs and outputs are used for AI model training under a legitimate interest basis unless they opt out through account settings; paid API and enterprise users are automatically excluded from this practice.
How other platforms handle this
We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization). We may leverage these models independent of user selection for processi...
We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.
After registration, you may create, upload or transmit files, documents, videos, images, data or information as part of your use of the Service (collectively, "User Content"). This includes any inputs you provide to our AI-powered support tools and outputs generated in response to your inputs. User ...
Monitoring
Mistral AI has changed this document before.
Receive same-day alerts, structured change summaries, and monitoring for up to 10 platforms.
"To train our artificial intelligence models (large language models) to answer questions, generate text, translate, summarize and correct text, classify text, analyze feelings, etc. according to context, Inputs (e-mails, letters, reports, computer code, etc.) and Outputs. [...] Your Input and Output, subject to your opt-out. Please note that we do not use your Input and Output to train our artificial intelligence models when you use Le Chat Enterprise or the paid version of our APIs. If you connect a third-party service to the Mistral AI Products, we do not use this third-party service to train our artificial intelligence models." — Excerpt from Mistral AI's Privacy Policy
1. REGULATORY LANDSCAPE: This provision engages the GDPR, specifically the requirements around lawful basis for processing and the Article 21 right to object to legitimate-interest processing. The French data protection authority (CNIL) has primary enforcement jurisdiction given Mistral AI's French incorporation, though the European Data Protection Board may also be relevant given cross-border processing. Relying on legitimate interest rather than consent for model training may require a documented balancing test that weighs Mistral AI's interest against user privacy expectations.

2. GOVERNANCE EXPOSURE: High. The opt-out structure for model training places the compliance burden on users rather than requiring affirmative consent, which may not align with regulatory expectations where AI training on personal data is involved. CNIL and other EU DPAs have signaled heightened scrutiny of AI training data practices, and the legitimate interest basis requires a documented balancing test that must be available to regulators on request.

3. JURISDICTION FLAGS: EU and EEA users face the most direct exposure given GDPR applicability and CNIL jurisdiction. California residents may have CCPA-based rights to know about and opt out of certain data uses. The policy does not explicitly address how non-EU users' rights to object are handled, which may create gaps for users in jurisdictions with emerging AI data governance frameworks.

4. CONTRACT AND VENDOR IMPLICATIONS: Procurement teams evaluating Mistral AI for enterprise use should confirm whether their subscription tier (Le Chat Enterprise or paid API) automatically excludes training use, and obtain written confirmation of that exclusion. B2B contracts should specify data processing terms and confirm zero-data-retention options where applicable.

5. COMPLIANCE CONSIDERATIONS: Legal teams should verify that the legitimate interest balancing test for model training has been formally documented and is available for regulatory review. Opt-out mechanisms should be audited for accessibility and effectiveness. Data mapping should confirm that opted-out users' data flows are segregated from training pipelines. Any change to training practices should trigger re-evaluation of the lawful basis documentation.
Full compliance analysis
Regulatory citations, enforcement risk, and due diligence action items.
Free: track 1 platform + weekly digest. Watcher: 10 platforms + same-day alerts. No credit card required.
How 10 AI platforms describe the use of user data for model training, improvement, and development, based on archived governance provisions.
Professional Governance Intelligence
Need to monitor specific governance provisions?
Professional includes provision-level monitoring, governance timelines, regulatory mapping, and audit-ready analysis.
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Mistral AI.