NVIDIA NIM · NVIDIA AI Foundation Models AUP

Prohibition on Deceptive Impersonation Using AI

High severity · Medium confidence · Explicit document language · Unique · 0 of 325 platforms
Document Record

What it is

Users cannot use NVIDIA's AI services to create fake content that misleadingly presents itself as being from or depicting a real person, including deepfakes or synthetic media of real individuals who have not consented.

This analysis describes what NVIDIA NIM's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This provision directly addresses synthetic media and deepfake generation, which is an area of active legislative and regulatory development in the U.S. and EU, and establishes that such use constitutes a prohibited activity under the agreement.

Interpretive note: The document does not define the threshold between prohibited deceptive impersonation and permitted uses such as clearly labeled satire or parody involving real individuals, which may require case-by-case assessment.

Consumer impact (what this means for users)

Individuals whose likenesses or identities are used by third parties through NVIDIA NIM in violation of this provision have a basis for reporting the violation to NVIDIA, though the document does not specify a complaint mechanism for affected third parties.

Cross-platform context

See how other platforms handle Prohibition on Deceptive Impersonation Using AI and similar clauses.


Monitoring

NVIDIA NIM has changed this document before.

Original Clause Language

"You may not use the Services to generate content that impersonates any real person, living or deceased, in a deceptive or misleading manner, or to create synthetic media that falsely depicts real individuals without their consent."

— Excerpt from NVIDIA NIM's NVIDIA AI Foundation Models AUP


Institutional analysis (Compliance & governance intelligence)

REGULATORY LANDSCAPE: This provision engages the FTC Act's prohibition on deceptive practices, state-level deepfake statutes (including those in California, Texas, and New York), and the EU AI Act's provisions on AI-generated synthetic content and transparency obligations for AI-generated media. The FTC and State Attorneys General are the relevant enforcement authorities.

GOVERNANCE EXPOSURE: High. Deepfake and synthetic-media restrictions are increasingly the subject of state and federal legislation in the U.S., and this provision aligns the contractual obligation with the direction of emerging law. However, the phrase "deceptive or misleading manner" is not precisely defined, creating ambiguity for satire, parody, or clearly labeled synthetic media use cases.

JURISDICTION FLAGS: California, Texas, and New York have enacted or are developing deepfake-specific statutes that may impose obligations on platforms and developers beyond NVIDIA's contractual terms. EU users are subject to the AI Act's transparency requirements for AI-generated content, including disclosure obligations for synthetic audio-visual content. Illinois BIPA may also be relevant where biometric data is used in synthetic media generation.

CONTRACT AND VENDOR IMPLICATIONS: Enterprise customers deploying NIM for content generation should implement consent verification workflows before generating synthetic media involving real individuals. B2B contracts should include downstream user obligations to comply with this restriction and applicable state deepfake laws.

COMPLIANCE CONSIDERATIONS: Legal teams should assess whether existing consent frameworks cover synthetic media generation, update privacy notices to disclose AI-generated content practices, and implement technical controls to prevent unauthorized synthetic media outputs. Organizations in media, advertising, and entertainment should conduct specific legal review of this provision against their intended use cases.
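A consent verification workflow of the kind described above could be sketched as a pre-generation gate. This is a minimal illustration, not NVIDIA's implementation: the `ConsentRegistry` class, subject identifiers, and `gate_generation` helper are all hypothetical names, and a real deployment would back the registry with signed consent records and an audit log.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    """Hypothetical registry mapping subject identifiers to documented consent.

    Illustrative only; production systems would persist signed consent
    records and log every lookup for audit purposes.
    """
    consents: dict = field(default_factory=dict)  # subject_id -> bool

    def has_consent(self, subject_id: str) -> bool:
        # Default to False: absence of a record means no documented consent.
        return self.consents.get(subject_id, False)

def gate_generation(registry: ConsentRegistry, subject_ids: list) -> list:
    """Return the subjects blocking a generation request.

    An empty return value means every depicted individual has documented
    consent and the request may proceed; otherwise the request is refused.
    """
    return [s for s in subject_ids if not registry.has_consent(s)]
```

Used as a pre-flight check, a request depicting any subject returned by `gate_generation` would be rejected before the model is ever invoked, which is one way to operationalize "technical controls to prevent unauthorized synthetic media outputs."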


Applicable agencies

  • FTC
    The FTC has jurisdiction over deceptive AI-generated content, including synthetic media used in a misleading manner, and has issued guidance on AI impersonation.
  • State AG
    Multiple states, including California, Texas, and New York, have enacted or are developing deepfake-specific statutes enforced by State Attorneys General.

Provision details

Document information
Document
NVIDIA AI Foundation Models AUP
Entity
NVIDIA NIM
Document last updated
May 12, 2026
Tracking information
First tracked
May 12, 2026
Last verified
May 12, 2026
Record ID
CA-P-011965
Document ID
CA-D-00821
Evidence Provenance
Source URL
Wayback Machine
Content hash (SHA-256)
41d8df21537bcb19cecceb53970dcae928102707e3b71a722cc1b090cbf6a1c6
Analysis generated
May 12, 2026 16:37 UTC
Methodology
Evidence
✓ Snapshot stored   ✓ Hash verified
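The content hash recorded above can be checked against a stored snapshot with a short script. This is a generic sketch of SHA-256 verification, not ConductAtlas tooling; the file path and function names are hypothetical.

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 8192) -> str:
    """Compute the SHA-256 hex digest of a file, reading in chunks
    so large snapshots do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_snapshot(path: str, expected_hex: str) -> bool:
    """Compare a stored snapshot against a recorded content hash."""
    return sha256_of_file(path) == expected_hex.lower()

# Example (hypothetical path): verify the archived document against
# the hash published in the record.
# verify_snapshot("snapshot.html",
#     "41d8df21537bcb19cecceb53970dcae928102707e3b71a722cc1b090cbf6a1c6")
```

A matching digest confirms the stored snapshot is byte-for-byte identical to the document that was hashed at capture time; any edit to the file, however small, changes the digest.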
Citation Record
Entity: NVIDIA NIM
Document: NVIDIA AI Foundation Models AUP
Record ID: CA-P-011965
Captured: 2026-05-12 16:37:18 UTC
SHA-256: 41d8df21537bcb19…
URL: https://conductatlas.com/platform/nvidia-nim/nvidia-ai-foundation-models-aup/prohibition-on-deceptive-impersonation-using-ai/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity
High
Categories


Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions


Is ConductAtlas affiliated with NVIDIA NIM?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by NVIDIA NIM.