Mistral AI · Mistral AI Privacy Policy

Model Training Use of User Inputs and Outputs

Medium severity · High confidence · Explicit document language · Unique · 0 of 325 platforms
Recent governance activity: Mistral AI recorded 4 documented changes in the last 30 days.
Document Record

What it is

By default, Mistral AI may use the prompts you type and the AI responses you receive to improve and train its AI models, unless you actively opt out. This does not apply if you pay for an API plan or use Le Chat Enterprise.

This analysis describes what Mistral AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

Free-tier users' conversations are treated as training data by default, meaning your personal questions, instructions, and AI responses could influence how Mistral AI's models behave for all users unless you take action to opt out.

Consumer impact (what this means for users)

Free Le Chat users' inputs and outputs are used for AI model training under a legitimate interest basis unless they opt out through account settings; paid API and enterprise users are automatically excluded from this practice.

What you can do

⚠️ These actions may provide transparency or partial mitigation but may not fully address the underlying issue. Effectiveness varies by jurisdiction and individual circumstances.
  • Opt Out of Model Training
    Log into your Mistral AI account, navigate to your user preferences or account settings, and locate the data usage or privacy section to opt out of having your Input and Output used for model training.

How other platforms handle this

Windsurf Medium

We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization). We may leverage these models independent of user selection for processi...

Ideogram Medium

We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.

Supabase Medium

After registration, you may create, upload or transmit files, documents, videos, images, data or information as part of your use of the Service (collectively, "User Content"). This includes any inputs you provide to our AI-powered support tools and outputs generated in response to your inputs. User ...


Monitoring

Mistral AI has changed this document before.

Original Clause Language
To train our artificial intelligence models (large language models) to answer questions, generate text, translate, summarize and correct text, classify text, analyze feelings, etc. according to context, Inputs (e-mails, letters, reports, computer code, etc.) and Outputs. [...] Your Input and Output, subject to your opt-out. Please note that we do not use your Input and Output to train our artificial intelligence models when you use Le Chat Enterprise or the paid version of our APIs. If you connect a third-party service to the Mistral AI Products, we do not use this third-party service to train our artificial intelligence models.

— Excerpt from Mistral AI's Mistral AI Privacy Policy


Institutional analysis (Compliance & governance intelligence)

1. Regulatory landscape: This provision engages GDPR, specifically the requirements around lawful basis for processing and the right to object under Article 21 for legitimate interest processing. The French data protection authority (CNIL) has primary enforcement jurisdiction given Mistral AI's French incorporation, though the European Data Protection Board may also have relevance given cross-border processing. The use of legitimate interest rather than consent for model training may require a documented balancing test that weighs Mistral AI's interest against user privacy expectations.

2. Governance exposure: High. The opt-out structure for model training places the compliance burden on users rather than requiring affirmative consent, which may not align with regulatory expectations where AI training on personal data is involved. CNIL and other EU DPAs have signaled heightened scrutiny of AI training data practices, and the legitimate interest basis requires a documented balancing test that must be available to regulators on request.

3. Jurisdiction flags: EU and EEA users face the most direct exposure given GDPR applicability and CNIL jurisdiction. California residents may have CCPA-based rights to know and opt out of certain data uses. The policy does not explicitly address how non-EU users' rights to object are handled, which may create gaps for users in jurisdictions with emerging AI data governance frameworks.

4. Contract and vendor implications: Procurement teams evaluating Mistral AI for enterprise use should confirm whether their subscription tier (Le Chat Enterprise or paid API) automatically excludes training use, and obtain written confirmation of this exclusion. B2B contracts should specify data processing terms and confirm zero-data-retention options where applicable.

5. Compliance considerations: Legal teams should verify that the legitimate interest balancing test for model training has been formally documented and is available for regulatory review. Consent mechanisms for opt-out should be audited for accessibility and effectiveness. Data mapping should confirm that opted-out users' data flows are segregated from training pipelines. Any changes to training practices should trigger re-evaluation of the lawful basis documentation.


Applicable agencies

  • FTC
    The FTC has authority over unfair or deceptive data practices affecting US consumers, including default opt-in data use practices for AI training.
    File a complaint →

Applicable regulations

EU AI Act
European Union
California AB 2013 AI Training Data Transparency
US-CA
Colorado AI Act
US-CO
EU AI Act - High Risk Provisions
EU
GDPR
European Union
Texas AI Act
Texas, USA
Trump Executive Order on AI Policy Framework
US

Provision details

Document information
Document
Mistral AI Privacy Policy
Entity
Mistral AI
Document last updated
May 5, 2026
Tracking information
First tracked
May 11, 2026
Last verified
May 11, 2026
Record ID
CA-P-010425
Document ID
CA-D-00443
Evidence Provenance
Source URL
Wayback Machine
Content hash (SHA-256)
a3774c814d80737846c7ac8379ec7dcc1c55ee8e0300de40dccee951ff5d0230
Analysis generated
May 11, 2026 05:55 UTC
Methodology
Evidence
✓ Snapshot stored   ✓ Hash verified
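The published SHA-256 content hash allows the stored snapshot to be checked independently against this record. A minimal Python sketch of that verification, assuming a locally saved copy of the captured document (the filename is hypothetical, and the exact bytes ConductAtlas hashes — raw HTML versus extracted text — are not specified here):

```python
import hashlib

# Content hash published in the evidence record above.
EXPECTED_SHA256 = "a3774c814d80737846c7ac8379ec7dcc1c55ee8e0300de40dccee951ff5d0230"

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical snapshot filename; a match confirms the local copy is
# byte-identical to the material ConductAtlas hashed at capture time.
# if sha256_of_file("mistral-privacy-policy-snapshot.html") == EXPECTED_SHA256:
#     print("snapshot matches published hash")
```

A digest mismatch here does not by itself indicate tampering; it may simply mean the local copy was saved with different encoding or post-processing than the original capture.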
Citation Record
Entity: Mistral AI
Document: Mistral AI Privacy Policy
Record ID: CA-P-010425
Captured: 2026-05-11 05:55:06 UTC
SHA-256: a3774c814d807378…
URL: https://conductatlas.com/platform/mistral-ai/mistral-ai-privacy-policy/model-training-use-of-user-inputs-and-outputs/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity
Medium
Categories


Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Mistral AI's Model Training Use of User Inputs and Outputs clause do?

Free-tier users' conversations are treated as training data by default, meaning your personal questions, instructions, and AI responses could influence how Mistral AI's models behave for all users unless you take action to opt out.

How does this clause affect you?

Free Le Chat users' inputs and outputs are used for AI model training under a legitimate interest basis unless they opt out through account settings; paid API and enterprise users are automatically excluded from this practice.

Is ConductAtlas affiliated with Mistral AI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Mistral AI.