You may not attempt to reverse engineer Mistral AI's models, use outputs to reconstruct how the AI works, or conduct security testing on the platform without authorization.
This analysis describes what Mistral AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
The prohibition on security and penetration testing is notable for enterprise customers with standard security due diligence requirements: under these terms, independent security assessments of the platform require Mistral AI's prior authorization.
Commercial customers are prohibited from independently testing the security of Mistral AI's platform or using AI outputs to reconstruct the underlying model, which may limit the security due diligence options available to organizations with formal vendor security assessment requirements.
How other platforms handle this
You may not: (i) use the Services to develop or improve a competing product or service; (ii) reverse engineer, decompile, disassemble, or otherwise attempt to discover the source code or underlying components of the Services; or (iii) use automated means to access or interact with the Services excep...
You agree not to (and not to allow any third party to): (i) decompile, reverse engineer, disassemble, attempt to derive the source code of, or decrypt the Services; (ii) make any modification, adaptation, improvement, enhancement, translation or derivative work from the Services; (iii) violate any a...
You may not use automated tools to scrape, crawl, or extract data or content from Runway's platform, or attempt to reverse engineer, decompile, or otherwise derive the source code or underlying models of Runway's tools and services.
Monitoring
Mistral AI has changed this document before.
"Customer will not, and will not permit any other person (including any End User) to: ... (d) attempt to reverse engineer, decompile, or otherwise attempt to discover the source code or underlying components (e.g., algorithms, weights, or systems) of the Mistral AI Products, including using the Output or any modified version of the Output to do any of the foregoing (except to the extent this restriction is prohibited by applicable law); (e) use the Output or any modified version of the Output to reverse engineer the Mistral AI Products; (f) compromise or attempt to compromise the security or proper functionality of the Mistral AI Products, including interfering with, circumventing, or bypassing security or moderation mechanisms in the Mistral AI Products or performing any vulnerability, penetration, or similar testing of the Mistral AI Products."

— Excerpt from Mistral AI's Commercial Terms
(1) REGULATORY LANDSCAPE: The prohibition on reverse engineering includes a statutory carve-out acknowledging that applicable law may override this restriction in certain jurisdictions (notably, EU Directive 2009/24/EC on software interoperability allows some reverse engineering). The security testing prohibition may interact with enterprise security vendor assessment requirements and with SOC 2 or ISO 27001 compliance frameworks that mandate independent security testing of third-party vendors.

(2) GOVERNANCE EXPOSURE: Medium. Organizations with formal vendor security assessment programs may find that this provision limits their ability to conduct independent penetration testing of the Mistral AI platform, requiring reliance on Mistral AI's own security certifications and attestations rather than independent verification.

(3) JURISDICTION FLAGS: EU organizations benefit from the statutory carve-out for reverse engineering under EU software law. The security testing prohibition is not jurisdiction-specific but may conflict with enterprise security policies that require independent testing of AI system vendors, particularly in regulated sectors.

(4) CONTRACT AND VENDOR IMPLICATIONS: Procurement teams should assess whether Mistral AI's security certifications and documentation are sufficient to meet their vendor due diligence requirements, given that independent penetration testing is contractually prohibited. Organizations may want to negotiate provisions in an Order Form for authorized security testing if their security governance framework requires it.

(5) COMPLIANCE CONSIDERATIONS: Security and compliance teams should request Mistral AI's available security certifications, audit reports, and penetration testing results as part of vendor onboarding, and assess whether the prohibition on independent security testing creates a gap in their vendor risk management program.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Mistral AI.