You cannot use Mistral AI to hack systems, create malicious software, or attempt to bypass Mistral's own safety measures, including attempts to "jailbreak" the AI.
Users who attempt to bypass Mistral's safety filters (e.g., through prompt injection or jailbreaking techniques) violate this policy and risk permanent account termination, even if no harmful content is ultimately generated.
The explicit prohibition on circumventing AI safety filters, often called jailbreaking, is a notable provision: it extends beyond standard cybersecurity prohibitions to cover attempts to manipulate the AI system itself.
REGULATORY FRAMEWORK: This provision engages the EU AI Act Article 5(1)(a)-(d) (prohibited manipulation and exploitation techniques), EU AI Act Article 15 (accuracy, robustness, and cybersecurity requirements for high-risk AI), and the EU Cybersecurity Act (ENISA framework). In the US, the Computer Fraud and Abuse Act (CFAA, 18 U.S.C. § 1030) may apply to unauthorized attempts to compromise AI systems. The EU's NIS2 Directive (Directive 2022/2555) imposes cybersecurity obligations on operators of essential and important entities that may be relevant to enterprise Mistral deployments. Enforcement authorities include ENISA, national cybersecurity authorities (ANSSI in France), and the DOJ/FBI in the US.