You cannot take videos, images, or other content created by Runway's AI and use them to build or improve a competing AI product, unless Runway gives you written permission first.
This analysis describes what Runway's agreement states, permits, or reserves; it does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This clause asserts a contractual restriction on what users may do with AI-generated outputs after creation, which is particularly significant for developers, researchers, and businesses that might otherwise incorporate generated content into their own AI training datasets.
Interpretive note: Enforceability of post-generation output restrictions on AI-generated content is legally unsettled and may vary significantly by jurisdiction, particularly where the copyright status of AI outputs is uncertain.
The terms prohibit users from using Runway-generated content to train, fine-tune, or develop competing AI models without prior written consent from Runway, which may affect developers and organizations with AI research or product development workflows that involve synthetic media.
How other platforms handle this
AWS (Amazon Bedrock): "You may not use the Services to develop foundation models or other large scale models that compete with Amazon Bedrock or any other AWS Service."

AI21: "You may not use the Services, including any outputs, to develop, train, fine-tune, or improve any machine learning model or artificial intelligence system that competes with AI21's products or services."

Mistral AI: "Customer will not, and will not permit any other person (including any End User) to: ... (d) attempt to reverse engineer, decompile, or otherwise attempt to discover the source code or underlying components (e.g., algorithms, weights, or systems) of the Mistral AI Products, including using the Outpu..."
Monitoring
Runway has changed this document before.
"You may not use any content generated by Runway's tools or services to train, fine-tune, or otherwise develop competing AI models or products without Runway's prior written consent." — Excerpt from Runway's Usage Policy
REGULATORY LANDSCAPE: This provision engages copyright law frameworks relevant to AI-generated content ownership, including the ongoing regulatory and judicial uncertainty in the US and EU regarding whether AI outputs qualify for copyright protection and who holds rights to them. The FTC Act is tangentially engaged if downstream AI training use creates market effects characterized as anticompetitive; however, this provision is primarily a contractual rather than regulatory matter. EU AI Act training data provisions may require evaluation if Runway-generated outputs are used in training systems deployed in the EU.

GOVERNANCE EXPOSURE: Medium. The enforceability of post-generation output restrictions on AI-generated content is legally unsettled. If Runway outputs do not qualify for copyright protection under applicable law, the contractual restriction may still be enforceable as a terms-of-service obligation, but the scope of enforcement is jurisdiction-dependent. Enterprise and developer users incorporating Runway outputs into AI pipelines face the highest operational exposure.

JURISDICTION FLAGS: US (copyright uncertainty for AI outputs); EU (AI Act training data requirements); jurisdictions where AI-generated content is treated as public domain may limit enforceability of this contractual restriction. California-based AI developers should assess this restriction against California's AI governance frameworks.

CONTRACT AND VENDOR IMPLICATIONS: Procurement teams should flag this restriction in vendor assessments and ensure AI training pipeline documentation does not include Runway outputs without documented written consent. B2B contracts involving AI development should include representations about compliance with upstream platform output restrictions. This clause asserts a liability risk for organizations that inadvertently incorporate Runway outputs into training data without consent.
COMPLIANCE CONSIDERATIONS: Legal teams should audit existing AI training datasets for the presence of Runway-generated content and obtain written consent where applicable. Contracts with third-party AI developers should include representations about compliance with platform-level output restrictions. Organizations should establish internal controls to flag Runway-sourced content before it enters training pipelines.
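The internal control described above could be sketched as a simple provenance gate over training records. This is a minimal illustration, not a compliance tool: the `source` and `consent_ref` metadata fields are hypothetical assumptions, not part of any Runway API or schema.

```python
# Hypothetical provenance gate: separate training records whose metadata
# marks them as Runway-generated and lack a documented consent reference.
# The "source" and "consent_ref" field names are illustrative assumptions.

RESTRICTED_SOURCES = {"runway"}

def partition_records(records):
    """Split records into (cleared, flagged) lists.

    A record is flagged when its provenance source is restricted and no
    written-consent reference is recorded for it.
    """
    cleared, flagged = [], []
    for rec in records:
        source = rec.get("source", "unknown").lower()
        if source in RESTRICTED_SOURCES and not rec.get("consent_ref"):
            flagged.append(rec)  # hold for legal review before training use
        else:
            cleared.append(rec)
    return cleared, flagged

dataset = [
    {"id": 1, "source": "internal", "uri": "clip_a.mp4"},
    {"id": 2, "source": "runway", "uri": "clip_b.mp4"},
    {"id": 3, "source": "runway", "uri": "clip_c.mp4", "consent_ref": "AGR-2024-017"},
]

cleared, flagged = partition_records(dataset)
```

In this sketch, record 2 would be flagged for review while records 1 and 3 pass, since record 3 carries a consent reference. A production control would also need audit logging and a verified registry of consent documents.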
Is ConductAtlas affiliated with Runway? No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Runway.