6 provisions total: 3 high severity, 3 medium severity, 0 low severity
Summary

This is NVIDIA's Acceptable Use Policy for its NIM AI inference platform and related AI products, setting out what users are and are not permitted to do when building applications using NVIDIA's AI models and APIs. The policy prohibits a specific list of uses including generating illegal content, facilitating violence or weapons development, impersonating individuals, bypassing AI safety controls, and using outputs to train competing AI models without authorization. If you or your organization uses NVIDIA NIM, review the prohibited use list carefully before deploying the service in any application, as violations can result in immediate account suspension.

Technical / Legal Breakdown

This document governs the acceptable use of NVIDIA NIM (AI inference microservices) and related AI products under NVIDIA's enterprise software terms, establishing the conditions under which users may access and deploy AI models and APIs. Users may not use the services for prohibited purposes, including generating content that violates applicable law, facilitating weapons of mass destruction, bypassing AI safety measures, or engaging in deceptive practices; the terms authorize NVIDIA to suspend or terminate access for violations.

The agreement asserts broad discretion for NVIDIA to determine what constitutes a violation and to act on that determination without prior notice, which is operationally distinct from frameworks that require notice-and-cure periods before termination. The policy also intersects with the EU AI Act, which imposes obligations on providers and deployers of AI systems depending on risk classification, and may require evaluation under FTC Act standards governing unfair or deceptive practices in AI-generated content.

Compliance teams deploying NIM in regulated sectors, including financial services, healthcare, and critical infrastructure, should assess whether their specific use cases align with both NVIDIA's enumerated permitted uses and applicable sector-specific AI governance requirements.

Institutional Analysis

Institutional analysis is available with Professional: regulatory exposure by statute, material risk assessment, vendor due diligence action items, and enforcement precedent.

Monitoring

NVIDIA NIM has updated this document before.

Watcher includes same-day alerts, structured change summaries, and monitoring for up to 10 platforms.


Professional Governance Intelligence

Need provision-level monitoring and regulatory mapping?

Professional includes governance timelines, compliance memos, audit-ready analysis, and full provision tracking.


Cross-platform context

See how other platforms handle Prohibited Use: Weapons of Mass Destruction and Harmful Content and similar clauses.


Mapped Governance Frameworks

California AB 2013 AI Training Data Transparency
US-CA
Archival Provenance: Source & Archival Record

Last Captured: May 12, 2026 06:56 UTC
Capture Method: Automated scheduled archival capture
Document ID: CA-D-000821
Version ID: CA-V-002531
SHA-256: 7e4cf188c9039baf25c39826b70668d55e99bdf6c77ffde6e6ed2f22d67f7f5b
✓ Snapshot stored ✓ Text extracted ✓ Change verified ✓ Hash verified
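The record above pairs a stored snapshot with a SHA-256 digest and a "Hash verified" status. As an illustration of that verification step, the sketch below recomputes a file's SHA-256 and compares it to a recorded digest. The file path and function names are hypothetical; only the digest format follows the record above.

```python
# Sketch: re-verify an archived document snapshot against its recorded
# SHA-256 digest. Replace the path and expected digest with the actual
# snapshot file and the SHA-256 value from the archival record.
import hashlib

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Stream the file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_snapshot(path: str, expected_hex: str) -> bool:
    """True if the computed digest matches the recorded one."""
    return sha256_of_file(path) == expected_hex.lower()
```

Streaming in chunks keeps memory flat for large captures; a mismatch indicates the local copy differs from the snapshot that was hashed at capture time.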

Governance Monitoring

Monitor governance changes across the platforms you rely on.

Structured alerts for policy changes, governance events, and provision updates across 318+ platforms.
