Users cannot use NVIDIA's AI services to create fake content that misleadingly presents itself as being from or depicting a real person, including deepfakes or synthetic media of real individuals who have not consented.
This analysis describes what NVIDIA NIM's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.
This provision directly addresses synthetic media and deepfake generation, which is an area of active legislative and regulatory development in the U.S. and EU, and establishes that such use constitutes a prohibited activity under the agreement.
Interpretive note: The document does not define the threshold between prohibited deceptive impersonation and permitted uses such as clearly labeled satire or parody involving real individuals, which may require case-by-case assessment.
Individuals whose likenesses or identities are used by third parties through NVIDIA NIM in violation of this provision have a basis for reporting the violation to NVIDIA, though the document does not specify a complaint mechanism for affected third parties.
"You may not use the Services to generate content that impersonates any real person, living or deceased, in a deceptive or misleading manner, or to create synthetic media that falsely depicts real individuals without their consent." — Excerpt from NVIDIA NIM's NVIDIA AI Foundation Models AUP
REGULATORY LANDSCAPE: This provision engages the FTC Act's prohibition on deceptive practices, state-level deepfake statutes (including those in California, Texas, and New York), and the EU AI Act's transparency obligations for AI-generated synthetic media. The FTC and state attorneys general are the relevant enforcement authorities.

GOVERNANCE EXPOSURE: High. Deepfake and synthetic media restrictions are increasingly the subject of state and federal legislation in the U.S., and this provision aligns the contractual obligation with the direction of emerging law. However, the phrase "deceptive or misleading manner" is not precisely defined, creating ambiguity for satire, parody, and clearly labeled synthetic media use cases.

JURISDICTION FLAGS: California, Texas, and New York have enacted or are developing deepfake statutes that may impose obligations on platforms and developers beyond NVIDIA's contractual terms. EU users are subject to the AI Act's transparency requirements for AI-generated content, including disclosure obligations for synthetic audio-visual material. Illinois BIPA may also apply where biometric data is used in synthetic media generation.

CONTRACT AND VENDOR IMPLICATIONS: Enterprise customers deploying NIM for content generation should implement consent verification workflows before generating synthetic media involving real individuals. B2B contracts should pass this restriction, and applicable state deepfake laws, through to downstream users as express obligations.

COMPLIANCE CONSIDERATIONS: Legal teams should assess whether existing consent frameworks cover synthetic media generation, update privacy notices to disclose AI-generated content practices, and implement technical controls to prevent unauthorized synthetic media outputs. Organizations in media, advertising, and entertainment should review this provision specifically against their intended use cases.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by NVIDIA NIM.