Provision severity: 8 total (3 high, 5 medium, 0 low)
Summary

This is Runway's acceptable use policy, which sets the rules for what you can and cannot create using Runway's AI video, image, and audio generation tools. The policy prohibits generating sexual content involving minors, non-consensual intimate imagery, deepfakes designed to deceive, content facilitating real-world violence, and using Runway's AI outputs to train other AI models without Runway's written permission. If you violate these rules, Runway states it may suspend or terminate your account.

Technical / Legal Breakdown

This document is Runway's Usage Policy (last updated March 6, 2026), governing acceptable use of Runway's AI-powered creative tools and platform. It operates in conjunction with Runway's Standard Terms of Use and Enterprise Agreement.

The policy states that users must not use the platform to generate content depicting minors in sexual contexts, facilitate violence or terrorism, produce non-consensual intimate imagery, create targeted harassment, generate deceptive synthetic media (deepfakes), or enable mass surveillance. The terms also prohibit automated scraping, reverse engineering, and use of outputs to train competing AI models without written consent.

The output-training prohibition is operationally distinct: it asserts a post-generation restriction on how outputs may be used, and its enforceability may vary with applicable copyright law and each jurisdiction's treatment of AI-generated content ownership.

The policy engages the EU AI Act, which treats certain AI applications (including emotion recognition in some settings, biometric categorization, and real-time remote biometric identification) as high-risk or prohibited. It also engages FTC Act standards on deceptive practices, particularly given the provisions on synthetic media and impersonation. Material compliance considerations include child-protection obligations under COPPA and equivalent frameworks, and obligations under emerging state-level deepfake and synthetic media statutes in jurisdictions such as California, Texas, and Virginia.


Monitoring

Runway has updated this document before.

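Change monitoring of a policy page reduces to comparing a fingerprint of each capture against the previous one. The sketch below is a generic illustration, not the vendor's actual pipeline; the state-file path and function names are hypothetical assumptions, and the whitespace normalization is one possible choice for ignoring cosmetic reflows.

```python
import hashlib
import json
from pathlib import Path

STATE_FILE = Path("policy_state.json")  # hypothetical local state store

def fingerprint(text: str) -> str:
    """Hash a policy capture after collapsing whitespace, so purely
    cosmetic reflows of the page do not register as changes."""
    normalized = " ".join(text.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def check_for_change(doc_id: str, current_text: str) -> bool:
    """Return True if the document differs from the last stored capture.
    The first capture of a document also counts as a change."""
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    new_fp = fingerprint(current_text)
    changed = state.get(doc_id) != new_fp
    if changed:
        state[doc_id] = new_fp
        STATE_FILE.write_text(json.dumps(state))
    return changed
```

A real monitor would additionally store the full snapshot so that a structured diff, not just a changed/unchanged flag, can be produced when the fingerprint moves.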



Mapped Governance Frameworks

California AB 2013, AI Training Data Transparency (US-CA)
Source & Archival Record

Last captured: May 11, 2026 19:22 UTC
Capture method: Automated scheduled archival capture
Document ID: CA-D-000773
Version ID: CA-V-002457
SHA-256: 2bf638fcdea730b5a117d8269ce28b912cdbafc39667a3cf9a5e240b4a615696
Status: snapshot stored, text extracted, change verified, hash verified
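Anyone holding the archived snapshot can independently check it against the recorded SHA-256 digest. The sketch below uses Python's standard hashlib; the function names are illustrative, and verification only succeeds against the exact byte-for-byte text that was originally hashed, so the normalization applied at capture time must be reproduced.

```python
import hashlib

def sha256_hex(text: str) -> str:
    """Return the SHA-256 digest of UTF-8 text as lowercase hex."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def verify_capture(captured_text: str, recorded_hash: str) -> bool:
    """Compare a captured document's digest against a recorded hash,
    e.g. the SHA-256 value published in an archival record."""
    return sha256_hex(captured_text) == recorded_hash.lower()
```

For example, `verify_capture(snapshot_text, "2bf638fc...")` would confirm that a locally held snapshot matches the record above, provided the snapshot bytes are identical to what was hashed at capture time.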
