Stability AI · Stability AI Model License

Acceptable Use Policy and Prohibited Content

Severity: High · Confidence: Low · Inferred from context · Unique · 0 of 325 platforms
Document Record

What it is

The license prohibits users from generating certain categories of content with Stability AI models, including child sexual abuse material, weapons-related content, and content designed to deceive others, regardless of which license tier the user holds.

This analysis describes what Stability AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

These prohibitions apply to all licensees and flow downstream to end users of products built on self-hosted models, meaning deployers are responsible for enforcing these restrictions within their own platforms.

Interpretive note: The specific prohibited categories and their exact wording are not visible in the truncated document; this analysis is based on the known structure of Stability AI's published acceptable use policy as referenced by the page context.

Consumer impact (what this means for users)

All users of products built on Stability AI models, including end consumers of third-party applications, are indirectly subject to these content prohibitions; deployers who fail to enforce the acceptable use policy risk losing their license and creating liability for any prohibited outputs generated on their platform.


Monitoring

Stability AI has changed this document before.


Institutional analysis (Compliance & governance intelligence)

1) REGULATORY LANDSCAPE: Prohibitions on CSAM generation engage federal criminal law in the US and equivalent statutes in most jurisdictions globally. Content designed to deceive engages FTC authority over deceptive practices and may interact with emerging AI transparency regulations in the EU, including provisions of the EU AI Act addressing prohibited AI practices. No specific statute articles are cited here due to document truncation.

2) GOVERNANCE EXPOSURE: High for deployers who self-host models and do not implement technical controls to enforce the acceptable use policy. In a self-hosted context, Stability AI cannot enforce these prohibitions directly; the obligation falls on the deployer to implement filtering, monitoring, or access controls. Failure to do so creates both license breach and potential regulatory and criminal exposure depending on the outputs generated.

3) JURISDICTION FLAGS: CSAM prohibitions apply universally. Deceptive content prohibitions interact with EU AI Act requirements on deep fakes and synthetic media disclosure for EU-serving deployments. California and other US state laws may impose additional disclosure obligations on AI-generated content. Illinois and other states with biometric privacy laws may be implicated if image generation models are used to generate identifiable synthetic likenesses.

4) CONTRACT AND VENDOR IMPLICATIONS: Organizations deploying self-hosted models must implement their own acceptable use enforcement mechanisms and should document these controls for legal defensibility. B2B agreements built on top of self-hosted deployments should incorporate appropriate downstream acceptable use obligations. Vendor assessments should verify that the deployer's technical controls are sufficient to prevent prohibited outputs.

5) COMPLIANCE CONSIDERATIONS: Compliance teams should conduct a content moderation audit of any platform built on self-hosted Stability AI models, implement technical safeguards against prohibited output categories, establish user-facing acceptable use terms that mirror or exceed the Stability AI policy, and maintain incident response procedures for prohibited content reports. EU-serving deployments should assess EU AI Act compliance obligations for synthetic media and prohibited AI practices.
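The technical safeguards described above can take many forms. As a minimal illustrative sketch only (not Stability AI's required mechanism or policy taxonomy), a deployer might gate model invocations behind a policy check. The category names, keyword lists, and `generate` stub below are hypothetical placeholders; a production system would use a trained safety classifier rather than keyword matching.

```python
# Sketch of a deployer-side acceptable-use gate.
# Categories, phrase lists, and generate() are hypothetical placeholders.

PROHIBITED_CATEGORIES = {
    "weapons": ["build a bomb", "3d-printed gun"],
    "deception": ["fake news article impersonating", "forged document"],
}

def classify_request(prompt: str) -> list[str]:
    """Return the prohibited categories a prompt appears to match."""
    lowered = prompt.lower()
    return [
        category
        for category, phrases in PROHIBITED_CATEGORIES.items()
        if any(phrase in lowered for phrase in phrases)
    ]

def guarded_generate(prompt: str) -> str:
    """Refuse prohibited requests before they reach the model."""
    hits = classify_request(prompt)
    if hits:
        # A real deployment would also log the event for the
        # incident-response procedures mentioned above.
        raise PermissionError(f"Request blocked; matched categories: {hits}")
    return generate(prompt)

def generate(prompt: str) -> str:
    # Stand-in for the actual self-hosted model invocation.
    return f"<output for: {prompt}>"
```

In practice the enforcement layer would also cover model *outputs* (not only prompts) and would retain audit logs to support the legal-defensibility documentation discussed in point 4.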


Applicable agencies

  • FTC
    The FTC has authority over deceptive practices relevant to AI-generated content designed to deceive consumers, which falls within this acceptable use prohibition.

Provision details

Document information
  • Document: Stability AI Model License
  • Entity: Stability AI
  • Document last updated: May 12, 2026

Tracking information
  • First tracked: May 12, 2026
  • Last verified: May 12, 2026
  • Record ID: CA-P-011999
  • Document ID: CA-D-00831

Evidence Provenance
  • Source URL: Wayback Machine
  • Content hash (SHA-256): 6c56f800306de8a5ff2509a42dd1191c3301a88526fa1ed7c9deff8da8bbf53f
  • Analysis generated: May 12, 2026 16:57 UTC
  • Evidence: ✓ Snapshot stored · ✓ Hash verified
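The recorded content hash lets anyone independently verify an archived copy of the document against this record. A minimal check, assuming you have the archived bytes locally (the file path in the commented example is hypothetical):

```python
import hashlib

# SHA-256 recorded in this provenance record.
RECORDED_SHA256 = (
    "6c56f800306de8a5ff2509a42dd1191c3301a88526fa1ed7c9deff8da8bbf53f"
)

def verify_snapshot(data: bytes, expected: str = RECORDED_SHA256) -> bool:
    """Compare the SHA-256 of an archived document against the recorded hash."""
    return hashlib.sha256(data).hexdigest() == expected

# Example usage with a locally archived copy (hypothetical filename):
# with open("stability-ai-model-license.html", "rb") as f:
#     assert verify_snapshot(f.read())
```

Note that the hash binds to the exact captured bytes; a re-download of the live page will generally not match if the document has since changed.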
Citation Record
  • Entity: Stability AI
  • Document: Stability AI Model License
  • Record ID: CA-P-011999
  • Captured: 2026-05-12 16:57:08 UTC
  • SHA-256: 6c56f800306de8a5…
  • URL: https://conductatlas.com/platform/stability-ai/stability-ai-model-license/acceptable-use-policy-and-prohibited-content/
  • Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
  • Severity: High
  • Categories:



Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Stability AI's Acceptable Use Policy and Prohibited Content clause do?

The clause prohibits generating certain categories of content with Stability AI models, including child sexual abuse material, weapons-related content, and content designed to deceive, regardless of license tier. These prohibitions apply to all licensees and flow downstream to end users of products built on self-hosted models, so deployers are responsible for enforcing these restrictions within their own platforms.

How does this clause affect you?

All users of products built on Stability AI models, including end consumers of third-party applications, are indirectly subject to these content prohibitions; deployers who fail to enforce the acceptable use policy risk losing their license and creating liability for any prohibited outputs generated on their platform.

Is ConductAtlas affiliated with Stability AI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Stability AI.