Organizations that deploy Stability AI models in their own products are required to pass through acceptable use obligations to their own users, meaning end users of third-party applications built on these models are also bound by Stability AI's use restrictions.
This analysis describes what Stability AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This provision creates a compliance obligation for deployers to implement and enforce acceptable use terms with their own customers, extending Stability AI's policy framework through the distribution chain.
Interpretive note: The specific downstream obligation language and its enforceability mechanism are not visible in the truncated document.
End users of applications built on Stability AI models are indirectly subject to Stability AI's acceptable use policy through the deployer's own terms of service; deployers who fail to implement these downstream obligations risk license breach.
1) REGULATORY LANDSCAPE: Downstream use restriction obligations are relevant to contract law and may interact with consumer protection frameworks if end user terms of service do not adequately disclose the underlying model's restrictions. The EU AI Act imposes transparency obligations on deployers of AI systems toward end users, which aligns with but extends beyond the license's downstream restriction requirements.

2) GOVERNANCE EXPOSURE: Medium. Deployers must implement legally enforceable terms of service with their own users that incorporate Stability AI's acceptable use restrictions. Failure to do so creates license breach exposure. The practicality of enforcing these obligations against end users at scale varies by deployment context.

3) JURISDICTION FLAGS: EU deployers have heightened obligations under the EU AI Act to disclose AI-generated content and ensure user-facing transparency. US state consumer protection laws may impose additional disclosure requirements on AI-generated outputs in certain sectors.

4) CONTRACT AND VENDOR IMPLICATIONS: B2B deployers should review whether their customer contracts adequately incorporate downstream acceptable use obligations. Consumer-facing products should include clear acceptable use terms that meet or exceed Stability AI's requirements. Legal review of end user agreements is a triggered compliance action.

5) COMPLIANCE CONSIDERATIONS: Compliance teams should audit existing end user agreements to confirm downstream acceptable use obligations are present, implement technical and procedural controls to detect and address violations, and establish processes for responding to third-party reports of acceptable use violations.
ConductAtlas is an independent monitoring service. It is not affiliated with, endorsed by, or sponsored by Stability AI.