Mistral AI can use your conversations to train its AI models if you are on the free plan or certain paid plans (Le Chat Pro or Le Chat Student) and have not turned off this option in your settings. If you give feedback like thumbs up or down, that data and the related conversation can always be used for training regardless of your opt-out status.
This analysis describes what Mistral AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.
Your conversations may contribute to improving Mistral AI's models by default on free and some paid plans, meaning what you type into the service could be reviewed and incorporated into future AI training unless you take action to opt out.
Users on free, Le Chat Pro, or Le Chat Student plans will have their Input and Output data used for AI training unless they actively opt out via account settings. Feedback data including the associated conversation is used for training regardless of opt-out status.
How other platforms handle this
We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization). We may leverage these models independent of user selection for processi...
Writer does not use Customer Data to train its AI models without explicit customer permission. Customer Data means the data, content, and information that customers and their end users submit to or through the Services.
We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.
Monitoring
Mistral AI has changed this document before.
Receive same-day alerts, structured change summaries, and monitoring for up to 10 platforms.
"We do not use Your Data to train our artificial intelligence models except (a) when you (i) use Mistral AI Products under a free subscription, or are subscribed to Le Chat Pro or Le Chat Student, and (ii) you have not opted-out of training, (b) when you provide Feedback to us, or (c) when Your Data is flagged as part of our automated moderation or reported as prohibited content, in which case we may use Your Data to improve the Mistral AI Products and enforce our Usage Policy." — Excerpt from Mistral AI's Terms of Service
(1) REGULATORY LANDSCAPE: This provision implicates data protection frameworks in the jurisdictions of affected users, including the UK GDPR, Canada's PIPEDA, Brazil's LGPD, Australia's Privacy Act, and the CCPA for California residents. The FTC Act may apply to US users if the opt-out mechanism is not sufficiently prominent or accessible. The EU AI Act imposes training data governance obligations on Mistral AI as a French provider of general-purpose AI systems, which interact with this provision's data sourcing practices. The relevant enforcement authorities include the UK Information Commissioner's Office, the French CNIL as Mistral AI's home regulator, and US state attorneys general for applicable state law claims.

(2) GOVERNANCE EXPOSURE: High. The default-on training use structure for free and specified paid subscribers creates exposure where applicable law requires affirmative opt-in consent for secondary processing, including in jurisdictions applying the UK GDPR or similar frameworks. The carve-out for feedback data, which is always usable for training, may also require evaluation under purpose limitation principles. The phrase "to the extent permitted by applicable law" qualifies the data ownership grant but does not address the training consent mechanism directly.

(3) JURISDICTION FLAGS: California residents may have rights under the CCPA regarding use of personal information for secondary purposes. UK users may be entitled to greater protections under the UK GDPR, where the basis for secondary processing and opt-out adequacy will be relevant. Canadian users may invoke PIPEDA's meaningful consent requirements. EU residents are excluded from this document, but Mistral AI's French domicile means CNIL oversight applies to the company's training data practices globally. Jurisdictions requiring opt-in consent rather than opt-out create heightened exposure.
(4) CONTRACT AND VENDOR IMPLICATIONS: Procurement teams integrating Mistral AI into employee-facing workflows should confirm that the opt-out is activated by default for organizational accounts, or that the commercial terms, rather than these consumer terms, govern. The training data license granted to Mistral AI is described as worldwide, non-exclusive, royalty-free, and sublicensable to delegates and subcontractors, which raises vendor chain visibility questions for data governance teams. B2B contracts should specify which terms govern to avoid inadvertent application of this training provision.

(5) COMPLIANCE CONSIDERATIONS: Compliance teams should audit whether the opt-out setting is documented in onboarding flows and whether users receive clear notice of the default training use at account creation. Data mapping exercises should identify whether employee or customer data entered into Mistral AI consumer products could fall within the training scope. Organizations should review the feedback mechanism specifically, as it operates as an unconditional training consent trigger that bypasses the general opt-out.
Full compliance analysis
Regulatory citations, enforcement risk, and due diligence action items.
How 10 AI platforms describe the use of user data for model training, improvement, and development, based on archived governance provisions.
Need to monitor specific governance provisions?
Professional includes provision-level monitoring, governance timelines, regulatory mapping, and audit-ready analysis.
Built from archived source documents, structured governance mappings, and historical version tracking.
ConductAtlas has identified this type of provision across 1 platform. See the full comparison.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Mistral AI.