Runway · Runway Usage Policy

Prohibition on Synthetic Media and Deepfakes Intended to Deceive

High severity · Medium confidence · Explicit document language · Unique · 0 of 325 platforms
Document Record

What it is

You cannot use Runway to make fake videos or images of real people that are designed to trick people into thinking they are real, or to impersonate someone without their permission.

This analysis describes what Runway's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This provision directly addresses one of the most significant societal risks associated with AI video and image generation tools: the creation of deceptive synthetic media depicting real individuals, including public figures and private persons.

Interpretive note: The provision's reliance on intent to deceive as the operative standard creates interpretive ambiguity, as intent may be difficult to establish at the point of content generation, and application varies across jurisdictions with different deepfake statute standards.

Consumer impact (what this means for users)

The terms prohibit creating deepfakes or synthetic media intended to deceive or to impersonate real individuals without consent, which establishes a clear boundary on AI-generated content that could be used for fraud, defamation, or manipulation.

How other platforms handle this

Mistral AI (Medium severity)

Customer will not, and will not permit any other person (including any End User) to: ... (d) attempt to reverse engineer, decompile, or otherwise attempt to discover the source code or underlying components (e.g., algorithms, weights, or systems) of the Mistral AI Products, including using the Outpu...

Perplexity AI (Medium severity)

You may not use the Services to attempt to circumvent, disable, or otherwise interfere with safety-related features of the Services, including features that prevent or restrict the generation of certain types of content.

AI21 Labs (Medium severity)

You may not use the Services, including any outputs, to develop, train, fine-tune, or improve any machine learning model or artificial intelligence system that competes with AI21's products or services.


Monitoring

Runway has changed this document before.

Original Clause Language
You may not use Runway's tools to create synthetic media — including but not limited to deepfakes — that are intended to deceive viewers into thinking the content is real, or to impersonate real individuals without their consent in a misleading way.

— Excerpt from Runway's Runway Usage Policy

ConductAtlas Analysis

Institutional analysis (Compliance & governance intelligence)

REGULATORY LANDSCAPE: This provision engages state-level deepfake statutes in California (AB 602 and AB 730, addressing non-consensual deepfake pornography and election-related deepfakes), Texas (HB 4337), and Virginia. At the federal level, the FTC Act's prohibition on deceptive practices is engaged. The EU AI Act prohibits certain AI systems used for manipulation, and its synthetic media disclosure requirements may apply to Runway as a provider. The EU's proposed AI Liability Directive may also be relevant.

GOVERNANCE EXPOSURE: High, for platform-level compliance. The provision's reliance on intent ("intended to deceive") creates enforcement ambiguity: determining intent at the point of generation is operationally difficult, and Runway's ability to detect prohibited use post-generation is limited by the nature of AI output delivery. Regulatory enforcement risk is highest in the EU and in states with explicit deepfake statutes.

JURISDICTION FLAGS: California, Texas, and Virginia have enacted or proposed deepfake-specific statutes. EU users are subject to EU AI Act requirements. Election-related deepfake prohibitions apply in multiple US states during election periods. The policy's prohibition applies globally under Runway's terms, but regulatory enforcement depends on local law.

CONTRACT AND VENDOR IMPLICATIONS: Enterprise customers using Runway in broadcast, media, or marketing contexts should assess whether their content workflows could produce synthetic media that triggers state deepfake statutes. Indemnification provisions in enterprise agreements should be reviewed to confirm the allocation of liability for policy-violating outputs generated by enterprise users.

COMPLIANCE CONSIDERATIONS: Compliance teams should establish content review protocols for synthetic media outputs in regulated contexts such as political advertising, financial services, and journalism. Runway's policy should be referenced in user-facing terms for any application built on the Runway platform. Organizations in the EU should assess EU AI Act disclosure requirements for AI-generated synthetic media.


Applicable agencies

  • FTC
    The FTC has jurisdiction over deceptive practices, and AI-generated synthetic media used to deceive consumers or impersonate individuals engages FTC Act standards.
  • State AG
    State attorneys general in California, Texas, and Virginia have enforcement authority under state-level deepfake and synthetic media statutes.

Applicable regulations

  • CFAA (United States, federal)
  • Trump Executive Order on AI Policy Framework (US)

Provision details

Document information
Document: Runway Usage Policy
Entity: Runway
Document last updated: May 11, 2026

Tracking information
First tracked: May 11, 2026
Last verified: May 11, 2026
Record ID: CA-P-010746
Document ID: CA-D-00773

Evidence Provenance
Source URL: Wayback Machine
Content hash (SHA-256): d90a4f3400a54d7669e1b9b15a5d0ba7bd004f5b9d282b11d7d85314456abb41
Analysis generated: May 11, 2026 22:34 UTC
Evidence: ✓ Snapshot stored · ✓ Hash verified
Citation Record
Entity: Runway
Document: Runway Usage Policy
Record ID: CA-P-010746
Captured: 2026-05-11 22:34:16 UTC
SHA-256: d90a4f3400a54d76…
URL: https://conductatlas.com/platform/runway/runway-usage-policy/prohibition-on-synthetic-media-and-deepfakes-intended-to-deceive/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
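The published content hash lets anyone independently confirm that an archived snapshot matches the record cited here. A minimal sketch in Python of that check, assuming the snapshot has been saved to a local file (the filename below is hypothetical, not part of the record):

```python
import hashlib


def sha256_hex(data: bytes) -> str:
    """Return the lowercase hex SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()


def verify_snapshot(data: bytes, expected_hash: str) -> bool:
    """Compare a snapshot's digest against a published content hash."""
    return sha256_hex(data) == expected_hash.lower()


# The content hash published in the Evidence Provenance section above.
PUBLISHED_HASH = "d90a4f3400a54d7669e1b9b15a5d0ba7bd004f5b9d282b11d7d85314456abb41"

# In practice, read the archived snapshot bytes and compare, e.g.:
# with open("runway-usage-policy-snapshot.html", "rb") as f:  # hypothetical path
#     assert verify_snapshot(f.read(), PUBLISHED_HASH)
```

Because SHA-256 is collision-resistant, a matching digest is strong evidence the stored snapshot is byte-for-byte identical to the document captured on the date shown, which is what makes the record a stable identifier for legal filings and citations.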
Classification

Severity: High



Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Runway's Prohibition on Synthetic Media and Deepfakes Intended to Deceive clause do?

This provision directly addresses one of the most significant societal risks associated with AI video and image generation tools: the creation of deceptive synthetic media depicting real individuals, including public figures and private persons.

How does this clause affect you?

The terms prohibit creating deepfakes or synthetic media intended to deceive or to impersonate real individuals without consent, which establishes a clear boundary on AI-generated content that could be used for fraud, defamation, or manipulation.

Is ConductAtlas affiliated with Runway?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Runway.