This page describes what the document states, permits, or reserves. It does not constitute a legal determination about enforceability, and regulatory applicability may vary by jurisdiction.
This is Runway's acceptable use policy, which sets the rules for what you can and cannot create using Runway's AI video, image, and audio generation tools. The policy prohibits generating sexual content involving minors, non-consensual intimate imagery, deepfakes designed to deceive, content facilitating real-world violence, and using Runway's AI outputs to train other AI models without Runway's written permission. If you violate these rules, Runway states it may suspend or terminate your account.
This document is Runway's Usage Policy (last updated March 6, 2026), governing acceptable use of Runway's AI-powered creative tools and platform. It operates in conjunction with Runway's Standard Terms of Use and Enterprise Agreement.

The policy states that users must not use the platform to generate content depicting minors in sexual contexts, facilitate violence or terrorism, produce non-consensual intimate imagery, create targeted harassment, generate deceptive synthetic media (deepfakes), or enable mass surveillance. The terms also prohibit automated scraping, reverse engineering, and use of outputs to train competing AI models without written consent.

The prohibition on using Runway outputs to train competing AI systems is operationally distinct in that it asserts a post-generation restriction on output use, the enforceability of which may vary depending on applicable copyright law and the jurisdiction's treatment of AI-generated content ownership.

The policy engages the EU AI Act, which classifies certain AI applications (including emotion recognition, biometric categorization, and real-time remote biometric identification) as high-risk or prohibited. It also engages FTC Act standards regarding deceptive practices, particularly given its provisions on synthetic media and impersonation. Material compliance considerations include age-verification obligations under COPPA and equivalent frameworks for any minor-adjacent content generation, as well as obligations under emerging state-level deepfake and synthetic media statutes in jurisdictions such as California, Texas, and Virginia.
Runway has updated this document before.