Stability AI · Stability AI Acceptable Use Policy

Prohibition on Deceptive Synthetic Media and Deepfakes

High severity · Low confidence · Inferred from context · Unique · 0 of 325 platforms
Document Record

What it is

The policy prohibits using Stability AI's models to create synthetic media, including realistic images, video, or audio of real people, that is designed to deceive viewers about its artificial origin or to misrepresent a real person's statements or actions.

This analysis describes what Stability AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This prohibition covers a category of AI-generated content that is increasingly the subject of specific legislation in multiple jurisdictions, and it establishes that creating non-consensual intimate imagery or politically deceptive deepfakes using Stability AI's tools violates the policy.

Interpretive note: The exact verbatim text was unavailable due to HTML truncation; this provision's scope and specific carve-outs cannot be confirmed without access to the full policy text.

Consumer impact (what this means for users)

Users who generate realistic synthetic media of real individuals without their consent, or who create content designed to deceive about its AI origin in contexts where this causes harm, may have their access to Stability AI's services terminated; this clause also has implications for operators building applications that could be used to produce such content at scale.

How other platforms handle this

OpenAI Medium

Don't claim to be human when directly and sincerely asked, use AI to deceive people about its fundamental nature, or impersonate real people or organizations in misleading ways.

Amazon Medium

Fraud and Deception. Attempting to defraud or misrepresent yourself or your services to others, including impersonating individuals or entities. Engaging in phishing, pharming, or other deceptive activities.

Runway Medium

You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.


Monitoring

Stability AI has changed this document before.


Institutional analysis (Compliance & governance intelligence)

(1) REGULATORY LANDSCAPE: This provision engages a rapidly expanding body of deepfake-specific legislation, including the proposed federal DEEPFAKES Accountability Act (US), California AB 602 and AB 730 (covering non-consensual intimate deepfakes and election deepfakes), Texas SB 751, Virginia Code § 18.2-386.2, and the UK Online Safety Act 2023, which criminalizes non-consensual intimate image sharing, including AI-generated material. The EU AI Act prohibits certain forms of AI-driven content manipulation and requires disclosure of AI-generated content in specified contexts. The FTC's authority over deceptive practices also applies where AI-generated content is used in commercial communications.

(2) GOVERNANCE EXPOSURE: High for operators deploying Stability AI models in consumer-facing applications, particularly social media tools, video editing platforms, and marketing technology, where end users may generate non-consensual intimate imagery or election-related deepfakes. State-level statutory damages provisions in California and other jurisdictions create direct financial exposure for platform operators.

(3) JURISDICTION FLAGS: California, Texas, Virginia, Georgia, and New York have enacted or are advancing deepfake-specific statutes. UK law now criminalizes non-consensual intimate synthetic images. EU member states are implementing DSA and AI Act provisions requiring synthetic media disclosure. Operators with users in these jurisdictions face heightened exposure.

(4) CONTRACT AND VENDOR IMPLICATIONS: API customers building image or video generation tools should assess whether their product design enables non-consensual deepfake creation and implement technical and policy safeguards. Procurement teams should evaluate whether their existing terms of service adequately prohibit this use and whether their content moderation infrastructure is sufficient.

(5) COMPLIANCE CONSIDERATIONS: Operators should implement content provenance mechanisms, such as C2PA watermarking or equivalent disclosure tools, to satisfy emerging regulatory requirements for identifying AI-generated content. Legal teams should map their user base against applicable deepfake statutes to identify jurisdictions requiring specific disclosures or prohibitions.
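At its simplest, the content-provenance idea described in item (5) amounts to binding a disclosure claim to a cryptographic hash of the generated asset, so any later tampering is detectable. The sketch below illustrates that idea only; it is not the actual C2PA manifest format, and the claim fields and generator label are hypothetical.

```python
import hashlib
import json


def make_provenance_record(asset_bytes: bytes, generator: str) -> dict:
    """Build a minimal provenance sidecar for a generated asset.

    Simplified illustration of hash-bound disclosure, not a C2PA manifest.
    """
    return {
        "claim": "ai_generated",          # disclosure assertion (hypothetical field name)
        "generator": generator,           # hypothetical model/tool label
        "sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }


def verify(asset_bytes: bytes, record: dict) -> bool:
    """Check that the asset still matches the hash bound into its record."""
    return hashlib.sha256(asset_bytes).hexdigest() == record["sha256"]


asset = b"...generated image bytes..."
record = make_provenance_record(asset, "example-image-model")
print(json.dumps(record, indent=2))
print(verify(asset, record))          # True for the untouched asset
print(verify(asset + b"!", record))   # False after any modification
```

Real C2PA manifests additionally sign the claim with an X.509 credential so the assertion itself cannot be forged; the hash check above only detects modification of the asset after the record was made.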


Applicable agencies

  • FTC
    The FTC has authority over deceptive practices in commercial contexts, including AI-generated synthetic media used in advertising or commercial communications
  • State AG
    Multiple state attorneys general enforce deepfake-specific statutes covering non-consensual intimate imagery and election-related synthetic media

Applicable regulations

CFAA
United States Federal
DMCA
United States Federal
DSA
European Union
Trump Executive Order on AI Policy Framework
United States Federal

Provision details

Document information
Document
Stability AI Acceptable Use Policy
Entity
Stability AI
Document last updated
May 11, 2026
Tracking information
First tracked
May 11, 2026
Last verified
May 12, 2026
Record ID
CA-P-011534
Document ID
CA-D-00772
Evidence Provenance
Source URL
Wayback Machine
Content hash (SHA-256)
6fe74fd03c821a478b697f38b02deeafcbbb7b9353c5fd3ff39e20c43b1db53c
Analysis generated
May 11, 2026 13:00 UTC
Methodology
Evidence
✓ Snapshot stored   ✓ Hash verified
Citation Record
Entity: Stability AI
Document: Stability AI Acceptable Use Policy
Record ID: CA-P-011534
Captured: 2026-05-11 13:00:52 UTC
SHA-256: 6fe74fd03c821a47…
URL: https://conductatlas.com/platform/stability-ai/stability-ai-acceptable-use-policy/prohibition-on-deceptive-synthetic-media-and-deepfakes/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity
High
Categories



Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Stability AI's Prohibition on Deceptive Synthetic Media and Deepfakes clause do?

This prohibition covers a category of AI-generated content that is increasingly the subject of specific legislation in multiple jurisdictions, and it establishes that creating non-consensual intimate imagery or politically deceptive deepfakes using Stability AI's tools violates the policy.

How does this clause affect you?

Users who generate realistic synthetic media of real individuals without their consent, or who create content designed to deceive about its AI origin in contexts where this causes harm, may have their access to Stability AI's services terminated; this clause also has implications for operators building applications that could be used to produce such content at scale.

Is ConductAtlas affiliated with Stability AI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Stability AI.