The license prohibits users from generating certain categories of content with Stability AI models, including child sexual abuse material, weapons-related content, and content designed to deceive others, regardless of which license tier the user holds.
This analysis describes what Stability AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
These prohibitions apply to all licensees and flow downstream to end users of products built on self-hosted models, meaning deployers are responsible for enforcing these restrictions within their own platforms.
Interpretive note: The specific prohibited categories and their exact wording are not visible in the truncated document; this analysis is based on the known structure of Stability AI's published acceptable use policy as referenced by the page context.
All users of products built on Stability AI models, including end consumers of third-party applications, are indirectly subject to these content prohibitions; deployers who fail to enforce the acceptable use policy risk losing their license and creating liability for any prohibited outputs generated on their platform.
Monitoring
Stability AI has changed this document before.
1) REGULATORY LANDSCAPE: Prohibitions on CSAM generation engage federal criminal law in the US and equivalent statutes in most jurisdictions globally. Content designed to deceive engages FTC authority over deceptive practices and may interact with emerging AI transparency regulations in the EU, including provisions of the EU AI Act addressing prohibited AI practices. No specific statute articles are cited here due to document truncation.

2) GOVERNANCE EXPOSURE: High for deployers who self-host models and do not implement technical controls to enforce the acceptable use policy. In a self-hosted context, Stability AI cannot enforce these prohibitions directly; the obligation falls on the deployer to implement filtering, monitoring, or access controls. Failure to do so creates both license breach and potential regulatory and criminal exposure depending on the outputs generated.

3) JURISDICTION FLAGS: CSAM prohibitions apply universally. Deceptive content prohibitions interact with EU AI Act requirements on deep fakes and synthetic media disclosure for EU-serving deployments. California and other US state laws may impose additional disclosure obligations on AI-generated content. Illinois and other states with biometric privacy laws may be implicated if image generation models are used to generate identifiable synthetic likenesses.

4) CONTRACT AND VENDOR IMPLICATIONS: Organizations deploying self-hosted models must implement their own acceptable use enforcement mechanisms and should document these controls for legal defensibility. B2B agreements built on top of self-hosted deployments should incorporate appropriate downstream acceptable use obligations. Vendor assessments should verify that the deployer's technical controls are sufficient to prevent prohibited outputs.
5) COMPLIANCE CONSIDERATIONS: Compliance teams should conduct a content moderation audit of any platform built on self-hosted Stability AI models, implement technical safeguards against prohibited output categories, establish user-facing acceptable use terms that mirror or exceed the Stability AI policy, and maintain incident response procedures for prohibited content reports. EU-serving deployments should assess EU AI Act compliance obligations for synthetic media and prohibited AI practices.
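The technical safeguards described above could take the shape of a pre-generation request gate in the deployer's own serving stack. The sketch below is purely illustrative, not Stability AI's mechanism: the category names, keyword lists, and function names are hypothetical, and a production deployment would rely on a trained safety classifier rather than keyword matching.

```python
# Hypothetical sketch of a prompt-screening gate for a self-hosted
# generation service. Category names and trigger phrases are illustrative
# placeholders, not Stability AI's actual policy text; a real deployment
# would use a trained safety classifier, not substring matching.
from dataclasses import dataclass
from typing import Optional

# Illustrative mapping of prohibited-output categories to trigger phrases.
CATEGORY_KEYWORDS = {
    "deceptive_content": ["forged passport", "fake government id"],
    "weapons_content": ["untraceable firearm", "improvised explosive"],
}

@dataclass
class ModerationDecision:
    allowed: bool
    matched_category: Optional[str] = None  # set only when a prompt is blocked

def screen_prompt(prompt: str) -> ModerationDecision:
    """Block a generation request that matches a prohibited category."""
    text = prompt.lower()
    for category, phrases in CATEGORY_KEYWORDS.items():
        if any(phrase in text for phrase in phrases):
            return ModerationDecision(allowed=False, matched_category=category)
    return ModerationDecision(allowed=True)
```

Logging each decision alongside the matched category would also support the documentation-for-defensibility point above: the audit trail shows the control was in place and operating when a prohibited request was refused.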
ConductAtlas is an independent monitoring service. It is not affiliated with, endorsed by, or sponsored by Stability AI.