Mistral AI · Mistral AI Usage Policy

Privacy Violation Prohibition

Medium severity · Medium confidence · Explicit document language · Unique · 0 of 325 platforms
Recent governance activity: Mistral AI recorded 4 documented changes in the last 30 days.
Document Record

What it is

You cannot use Mistral AI to create content that uses another real person's face, voice, or identity without their permission.

This analysis describes what Mistral AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This provision directly addresses deepfake-style generation and AI impersonation, which are areas of increasing regulatory attention and potential legal liability for users.

Interpretive note: The provision does not define what constitutes adequate prior consent, and the scope of 'likeness or voice' may be interpreted broadly or narrowly depending on the specific output and applicable jurisdiction.

Consumer impact (what this means for users)

Users who generate synthetic media using real people's likenesses or voices without consent may violate this policy and face account termination, in addition to potential separate legal liability under applicable privacy or personality rights laws.

How other platforms handle this

Runway Medium

You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.

Perplexity AI Medium

You may not use the Services to attempt to circumvent, disable, or otherwise interfere with safety-related features of the Services, including features that prevent or restrict the generation of certain types of content.

AI21 Labs Medium

You may not use the Services, including any outputs, to develop, train, fine-tune, or improve any machine learning model or artificial intelligence system that competes with AI21's products or services.


Monitoring

Mistral AI has changed this document before.

Original Clause Language

"Generating content that invades or violates the privacy of others is prohibited. This includes, for instance, using someone else's likeness or voice to generate outputs or impersonate them, without their prior consent."

— Excerpt from Mistral AI's Mistral AI Usage Policy

ConductAtlas Analysis

Institutional analysis (Compliance & governance intelligence)

(1) Regulatory landscape: This provision engages GDPR Article 9 where biometric data is used to generate likenesses, personality rights laws in EU member states, and, in the U.S., state-level right of publicity statutes (including California's and New York's) as well as emerging state deepfake laws. The EU AI Act includes specific provisions addressing AI-generated synthetic content and disclosure requirements. The FTC has signaled enforcement interest in AI-generated impersonation under the FTC Act's impersonation rule.

(2) Governance exposure: Medium to High. The scope of 'likeness or voice' is potentially broad and could encompass a range of generative outputs beyond obvious deepfakes. The consent requirement is not further defined, creating interpretive ambiguity about what constitutes adequate prior consent.

(3) Jurisdiction flags: Illinois's Biometric Information Privacy Act (BIPA) may be engaged where voice or facial data is processed to generate outputs; BIPA's private right of action creates heightened exposure for platform operators and potentially for users whose outputs involve Illinois residents. California's deepfake law and New York's right of publicity statute create additional state-level exposure.

(4) Contract and vendor implications: Enterprise customers in media, marketing, or content production sectors who intend to generate synthetic content should assess whether their intended use cases comply with this provision and applicable personality rights laws, and may need to establish consent management frameworks for any real individuals whose likenesses are used.

(5) Compliance considerations: The provision's focus on user-generated prohibited content means that enterprise customers deploying the platform should assess whether their downstream users could generate non-consensual likeness content, and whether their own terms of service and content moderation frameworks adequately address this risk.


Applicable agencies

  • FTC
    The FTC's impersonation rule and enforcement authority over unfair or deceptive practices are directly relevant to AI-generated impersonation and non-consensual synthetic media
  • State AG
    State attorneys general in Illinois, California, and New York have enforcement authority over biometric privacy, deepfake, and right of publicity laws that this provision engages

Applicable regulations

  • CFAA (United States, federal)
  • Trump Executive Order on AI Policy Framework (US)

Provision details

Document information
Document: Mistral AI Usage Policy
Entity: Mistral AI
Document last updated: May 5, 2026

Tracking information
First tracked: May 10, 2026
Last verified: May 10, 2026
Record ID: CA-P-008545
Document ID: CA-D-00445
Evidence Provenance
Source URL: Wayback Machine
Content hash (SHA-256): 4a647ade8f5f2eb30d2e5e5ee6617d619642adf8e8b6e229a01cda9cb95fc549
Analysis generated: May 10, 2026 07:58 UTC
Evidence: ✓ Snapshot stored · ✓ Hash verified
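The provenance record above pairs an archived snapshot with a SHA-256 content hash. A reader holding a copy of the snapshot can check it against the published digest; the sketch below is a hypothetical illustration, assuming the recorded hash covers the raw snapshot bytes (the file path and function names are ours, not ConductAtlas's):

```python
# Hypothetical verification sketch: recompute the SHA-256 of a locally
# stored snapshot and compare it to the digest published in the record.
import hashlib

# Digest as published in the Evidence Provenance section above.
RECORDED_SHA256 = "4a647ade8f5f2eb30d2e5e5ee6617d619642adf8e8b6e229a01cda9cb95fc549"

def sha256_of_file(path: str) -> str:
    """Hash the file in 64 KiB chunks so large snapshots are not read into memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_record(path: str) -> bool:
    # Hex digests are conventionally lowercase; normalize before comparing.
    return sha256_of_file(path) == RECORDED_SHA256.lower()
```

If the comparison fails, the local copy differs byte-for-byte from the snapshot the record describes, which is exactly the tamper-evidence such a hash is meant to provide.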
Citation Record
Entity: Mistral AI
Document: Mistral AI Usage Policy
Record ID: CA-P-008545
Captured: 2026-05-10 07:58:26 UTC
SHA-256: 4a647ade8f5f2eb3…
URL: https://conductatlas.com/platform/mistral-ai/mistral-ai-usage-policy/privacy-violation-prohibition/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity: Medium
Categories



Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Mistral AI's Privacy Violation Prohibition clause do?

This provision directly addresses deepfake-style generation and AI impersonation, which are areas of increasing regulatory attention and potential legal liability for users.

How does this clause affect you?

Users who generate synthetic media using real people's likenesses or voices without consent may violate this policy and face account termination, in addition to potential separate legal liability under applicable privacy or personality rights laws.

Is ConductAtlas affiliated with Mistral AI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Mistral AI.