You cannot use Mistral AI to deliberately spread false information, particularly about health, politics, science, or elections, or to spread harmful conspiracy theories.
Users who generate deliberately false or politically manipulative content using Mistral AI products — including AI-generated election misinformation — violate this policy and face account suspension or termination.
The explicit reference to content that "undermines the integrity of a civic or political process" addresses AI-generated election interference, a growing regulatory concern in the EU, the US, and globally.
REGULATORY FRAMEWORK: This provision engages Article 50 of the EU AI Act (transparency obligations for AI-generated content, including deepfakes and synthetic media), Articles 34-35 of the EU Digital Services Act (DSA) (systemic risk assessments and mitigation duties for very large online platforms with respect to disinformation), the EU Code of Practice on Disinformation (2022), and GDPR Article 22 (automated decision-making) where misinformation involves personal data. In the US, Section 5 of the FTC Act applies to deceptive AI-generated content in commercial contexts, and election-related misinformation engages FEC regulations and state election laws.