Stability AI · Stability AI Acceptable Use Policy

Prohibition on Harassment and Non-Consensual Intimate Imagery

High severity · Low confidence · Inferred from context · Unique clause (0 of 325 tracked platforms)
Document Record

What it is

The policy prohibits using Stability AI's models to generate content that harasses, threatens, or targets individuals, including generating intimate or sexual imagery of real people without their consent.

This analysis describes what Stability AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

Non-consensual intimate image generation is criminalized or subject to civil liability in a growing number of jurisdictions, and this prohibition establishes that producing such content using Stability AI's tools constitutes a policy violation regardless of whether the output is photorealistic.

Interpretive note: Exact verbatim text was unavailable due to HTML truncation; the specific scope of the harassment and NCII prohibition, including any carve-outs for artistic or research contexts, cannot be confirmed without the full document.

Consumer impact (what this means for users)

Users who generate harassing content or non-consensual intimate imagery of real individuals using Stability AI's models violate this provision and may have their access terminated; platform operators who fail to prevent such use on their services may also face AUP breach.

How other platforms handle this

Runway — Medium severity

You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.

Mistral AI — Medium severity

Customer will not, and will not permit any other person (including any End User) to: ... (d) attempt to reverse engineer, decompile, or otherwise attempt to discover the source code or underlying components (e.g., algorithms, weights, or systems) of the Mistral AI Products, including using the Outpu...

Perplexity AI — Medium severity

You may not use the Services to attempt to circumvent, disable, or otherwise interfere with safety-related features of the Services, including features that prevent or restrict the generation of certain types of content.


Monitoring

Stability AI has changed this document before.


Institutional analysis (Compliance & governance intelligence)

(1) REGULATORY LANDSCAPE: Non-consensual intimate image (NCII) generation is addressed by the UK Online Safety Act 2023 (which created a new criminal offense for sharing or threatening to share intimate images without consent), the DEFIANCE Act (US federal, enacted 2024, creating a civil right of action for victims of AI-generated NCII), and state statutes in California, Virginia, Texas, Georgia, and others. The EU AI Act's provisions on manipulation and the DSA's illegal-content obligations are also engaged. The FTC's authority over unfair practices may apply to platforms that enable NCII generation.

(2) GOVERNANCE EXPOSURE: High for operators building image or video generation tools accessible to general consumers. The DEFIANCE Act creates a federal civil right of action for victims, and state NCII statutes impose criminal and civil liability that may extend to platforms that knowingly facilitate such generation.

(3) JURISDICTION FLAGS: The UK, the US (federal and multiple states), and EU member states have all enacted or are enacting NCII-specific laws. Operators with users in these jurisdictions face compound exposure from the AUP prohibition and applicable national law. Illinois, New York, and additional states are advancing NCII-specific legislation.

(4) CONTRACT AND VENDOR IMPLICATIONS: Platform operators deploying Stability AI's image generation capabilities in consumer products should implement technical safeguards, including face detection and identity-matching classifiers, to reduce NCII generation risk. Legal teams should assess whether their platform design creates conditions for NCII liability under applicable state or federal statutes.

(5) COMPLIANCE CONSIDERATIONS: Operators should document their NCII prevention measures and assess whether their content moderation policies satisfy applicable regulatory requirements. Legal teams should evaluate whether terms of service adequately disclaim and prohibit NCII generation and whether user reporting mechanisms meet regulatory expectations under the Online Safety Act or DSA.
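The "technical safeguards" recommended in point (4) often begin with a request-screening gate run before any image is generated. The sketch below is illustrative only: the `screen_prompt` function, the `named_person_detected` flag, and the pattern list are all hypothetical, and a production system would rely on trained classifiers (face detection and identity matching on outputs) rather than keyword matching.

```python
import re

# Hypothetical deny-list patterns. A real deployment would use trained
# classifiers, not regex, but the gating logic is the same shape.
DENY_PATTERNS = [
    re.compile(r"\bnude|naked|intimate\b", re.IGNORECASE),
]

def screen_prompt(prompt: str, named_person_detected: bool) -> bool:
    """Return True if the request may proceed, False if it should be blocked.

    Blocks when a prompt combines sexual/intimate terms with a reference
    to a real, identifiable person -- the core NCII risk pattern. The
    named_person_detected flag stands in for an upstream entity/identity
    detector.
    """
    sexual = any(p.search(prompt) for p in DENY_PATTERNS)
    return not (sexual and named_person_detected)
```

The key design choice is that neither signal alone blocks a request: intimate content with no identifiable person, or a named person in a benign context, passes the gate, which keeps the false-positive rate manageable while targeting the specific conduct the AUP prohibits.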


Applicable agencies

  • FTC — The FTC has authority over unfair and deceptive practices by platforms that fail to implement adequate safeguards against non-consensual intimate image generation despite representing their services as responsible or safe.
  • State AG — State attorneys general enforce NCII statutes in California, Virginia, Texas, and other jurisdictions where non-consensual intimate image generation and sharing are subject to civil or criminal penalties.

Applicable regulations

  • CFAA — United States (federal)
  • DMCA — United States (federal)
  • DSA — European Union
  • Trump Executive Order on AI Policy Framework — United States (federal)

Provision details

Document information
  • Document: Stability AI Acceptable Use Policy
  • Entity: Stability AI
  • Document last updated: May 11, 2026

Tracking information
  • First tracked: May 11, 2026
  • Last verified: May 12, 2026
  • Record ID: CA-P-011539
  • Document ID: CA-D-00772

Evidence Provenance
  • Source URL: Wayback Machine
  • Content hash (SHA-256): 6fe74fd03c821a478b697f38b02deeafcbbb7b9353c5fd3ff39e20c43b1db53c
  • Analysis generated: May 11, 2026 13:00 UTC
  • Evidence: ✓ Snapshot stored · ✓ Hash verified
Citation Record
Entity: Stability AI
Document: Stability AI Acceptable Use Policy
Record ID: CA-P-011539
Captured: 2026-05-11 13:00:52 UTC
SHA-256: 6fe74fd03c821a47…
URL: https://conductatlas.com/platform/stability-ai/stability-ai-acceptable-use-policy/prohibition-on-harassment-and-non-consensual-intimate-imagery/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity: High
Categories



Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Stability AI's Prohibition on Harassment and Non-Consensual Intimate Imagery clause do?

The clause prohibits using Stability AI's models to generate content that harasses, threatens, or targets individuals, including intimate or sexual imagery of real people without their consent. Producing such content constitutes a policy violation regardless of whether the output is photorealistic, and non-consensual intimate image generation is criminalized or subject to civil liability in a growing number of jurisdictions.

How does this clause affect you?

Users who generate harassing content or non-consensual intimate imagery of real individuals using Stability AI's models violate this provision and may have their access terminated; platform operators who fail to prevent such use on their services may also face AUP breach.

Is ConductAtlas affiliated with Stability AI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Stability AI.