Security Violations and AI Safety Filter Circumvention

Medium severity

What it is

You cannot use Mistral AI to hack systems, create malicious software, or try to bypass Mistral's own safety measures — including attempts to 'jailbreak' the AI.

Consumer impact (what this means for users)

Users who attempt to bypass Mistral's safety filters (e.g., through prompt injection or jailbreaking techniques) violate this policy and risk permanent account termination, even if no harmful content is ultimately generated.

Cross-platform context

See how other platforms handle Security Violations and AI Safety Filter Circumvention clauses and similar provisions.


Why it matters (compliance & risk perspective)

The explicit prohibition on circumventing AI safety filters — often called jailbreaking — is a notable provision that extends beyond standard cybersecurity prohibitions to cover attempts to manipulate the AI system itself.

Original clause language
You shall not use the Mistral AI Products to compromise, or attempt to compromise, the security of Mistral AI, the Mistral AI Products, or any other third party. This includes creating malware and exploiting vulnerabilities. You shall not try to circumvent security protections and AI safety filters.

Institutional analysis (Compliance & legal intelligence)

REGULATORY FRAMEWORK: This provision engages the EU AI Act Article 5(1)(a)-(d) (prohibited manipulation and exploitation techniques), EU AI Act Article 15 (accuracy, robustness, and cybersecurity requirements for high-risk AI), and the EU Cybersecurity Act (ENISA framework). In the US, the Computer Fraud and Abuse Act (CFAA, 18 U.S.C. § 1030) may apply to unauthorized attempts to compromise AI systems. The EU's NIS2 Directive (Directive 2022/2555) imposes cybersecurity obligations on operators of essential and important entities that may be relevant to enterprise Mistral deployments. Enforcement authorities include ENISA, national cybersecurity authorities (ANSSI in France), and the DOJ/FBI in the US.

Applicable agencies

  • FTC
    The FTC has authority over unfair or deceptive practices and has issued guidance on AI security obligations for platform providers.

Provision details

Document information
Document: Mistral AI Usage Policy
Entity: Mistral AI
Document last updated: April 29, 2026

Tracking information
First tracked: April 30, 2026
Last verified: April 30, 2026
Record ID: CA-P-004158
Document ID: CA-D-00445

Evidence Provenance
Source URL: Wayback Machine
SHA-256: d65d8a1b8b57a55ee50c42e13a559c085eef6b73124deead6b2837c2784efeda
Verified: ✓ Snapshot stored   ✓ Change verified
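
The published digest can be checked against a downloaded copy of the snapshot. A minimal sketch in Python, assuming the snapshot has been saved locally ("snapshot.html" is a hypothetical filename):

    import hashlib

    # Digest published in the Evidence Provenance record above
    EXPECTED_SHA256 = "d65d8a1b8b57a55ee50c42e13a559c085eef6b73124deead6b2837c2784efeda"

    def sha256_of_file(path: str) -> str:
        # Stream in chunks so large snapshots are not read into memory at once
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # "snapshot.html" stands in for the downloaded snapshot file
    if sha256_of_file("snapshot.html") == EXPECTED_SHA256:
        print("Snapshot matches the archived digest.")
    else:
        print("Digest mismatch: file differs from the archived snapshot.")

A mismatch only means the local file differs byte-for-byte from the capture; the exact archived snapshot must be retrieved for the digests to agree.
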
How to Cite
ConductAtlas Policy Archive
Entity: Mistral AI | Document: Mistral AI Usage Policy | Record: CA-P-004158
Captured: 2026-04-30 06:38:30 UTC | SHA-256: d65d8a1b8b57a55e…
URL: https://conductatlas.com/platform/mistral-ai/mistral-ai-usage-policy/security-violations-and-ai-safety-filter-circumvention/
Accessed: May 2, 2026
Classification
Severity: Medium
Categories
