You cannot use Mistral AI to create content that uses another real person's face, voice, or identity without their permission.
This analysis describes what Mistral AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This provision directly addresses deepfake-style generation and AI impersonation, which are areas of increasing regulatory attention and potential legal liability for users.
Interpretive note: The provision does not define what constitutes adequate prior consent, and the scope of 'likeness or voice' may be interpreted broadly or narrowly depending on the specific output and applicable jurisdiction.
Users who generate synthetic media using real people's likenesses or voices without consent may violate this policy and face account termination, in addition to potential separate legal liability under applicable privacy or personality rights laws.
How other platforms handle this
You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.
You may not use the Services to attempt to circumvent, disable, or otherwise interfere with safety-related features of the Services, including features that prevent or restrict the generation of certain types of content.
You may not use the Services, including any outputs, to develop, train, fine-tune, or improve any machine learning model or artificial intelligence system that competes with AI21's products or services.
Monitoring
Mistral AI has changed this document before.
"Generating content that invades or violates the privacy of others is prohibited. This includes, for instance, using someone else's likeness or voice to generate outputs or impersonate them, without their prior consent.— Excerpt from Mistral AI's Mistral AI Usage Policy
(1) REGULATORY LANDSCAPE: This provision engages GDPR Article 9 where biometric data is used to generate likenesses, personality rights laws in EU member states, and, in the U.S., state-level right-of-publicity statutes (including California's and New York's) as well as emerging state deepfake laws. The EU AI Act includes specific provisions addressing AI-generated synthetic content and disclosure requirements. The FTC has signaled enforcement interest in AI-generated impersonation under its impersonation rule.
(2) GOVERNANCE EXPOSURE: Medium to High. The scope of 'likeness or voice' is potentially broad and could encompass a range of generative outputs beyond obvious deepfakes. The consent requirement is not further defined, creating interpretive ambiguity about what constitutes adequate prior consent.
(3) JURISDICTION FLAGS: Illinois's Biometric Information Privacy Act (BIPA) may be engaged where voice or facial data is processed to generate outputs; BIPA's private right of action creates heightened exposure for platform operators and potentially for users whose outputs involve Illinois residents. California's deepfake law and New York's right-of-publicity statute create additional state-level exposure.
(4) CONTRACT AND VENDOR IMPLICATIONS: Enterprise customers in media, marketing, or content production who intend to generate synthetic content should assess whether their intended use cases comply with this provision and applicable personality rights laws, and may need to establish consent management frameworks for any real individuals whose likenesses are used.
(5) COMPLIANCE CONSIDERATIONS: Because the provision targets user-generated prohibited content, enterprise customers deploying the platform should assess whether their downstream users could generate non-consensual likeness content, and whether their own terms of service and content moderation frameworks adequately address this risk.
ConductAtlas is an independent monitoring service. It is not affiliated with, endorsed by, or sponsored by Mistral AI.