This page describes what the document states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability may vary by jurisdiction.
This is Stability AI's Acceptable Use Policy, which sets the rules for using Stability AI's image, video, audio, and language AI models, whether directly through the website or via the API in third-party applications. The policy prohibits specific categories of harmful use, including generating any sexual content involving minors, deepfakes designed to deceive, or content promoting violence, as well as using the AI to develop weapons or attack critical infrastructure. If you use Stability AI through a third-party app, the developer of that app is also bound by these rules and is responsible for ensuring their platform complies.
This document is Stability AI's Acceptable Use Policy (AUP), which governs the permitted and prohibited uses of Stability AI's models, APIs, and services and establishes the contractual framework under which users and developers may access the company's generative AI outputs. Users must not use the services for unlawful purposes, for generating content that sexualizes minors, for creating disinformation or misleading synthetic media, for harassment, or for developing weapons or harmful systems, and the terms authorize Stability AI to suspend or terminate access for violations. The policy applies both to direct end users and to developers or operators who deploy Stability AI models in downstream applications, creating a layered compliance obligation in which API customers bear responsibility for ensuring their platforms comply with the AUP.

The document intersects with several regulatory frameworks: the EU AI Act, which imposes requirements on providers and deployers of AI systems regarding prohibited use cases and high-risk applications; national laws on child sexual abuse material and export controls; and content moderation obligations under the UK Online Safety Act and the EU Digital Services Act. Compliance teams should note that the AUP's downstream operator obligations may require contractual flow-down provisions in B2B agreements, and that the prohibited use categories touching on biometric data, political manipulation, and critical infrastructure interact with sector-specific regulations across multiple jurisdictions.
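Purely as an illustration of that layered obligation, and not anything the AUP itself prescribes, the sketch below shows how a downstream operator might screen user prompts before forwarding them to a generation endpoint. The theme names, keyword sets, and the screen_prompt helper are hypothetical placeholders; a real compliance pipeline would need far more robust classification and human review.

```python
# Illustrative only: a naive pre-generation screen that a downstream operator
# might run before forwarding a user prompt to any generative AI endpoint.
# The theme names loosely paraphrase the AUP's prohibited-use categories; the
# keyword sets and the screen_prompt() helper are hypothetical placeholders,
# not an official or sufficient compliance mechanism.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ScreenResult:
    allowed: bool
    matched_themes: List[str] = field(default_factory=list)

# Hypothetical keyword map keyed by prohibited-use theme (paraphrased).
PROHIBITED_THEMES = {
    "minor_sexualization": set(),                # terms deliberately omitted
    "deceptive_synthetic_media": {"deepfake of"},
    "weapons_development": {"bioweapon"},
    "critical_infrastructure_attack": {"disable the power grid"},
}

def screen_prompt(prompt: str) -> ScreenResult:
    """Flag a prompt whose text matches any hypothetical prohibited-use term."""
    lowered = prompt.lower()
    hits = [
        theme
        for theme, terms in PROHIBITED_THEMES.items()
        if any(term in lowered for term in terms)
    ]
    return ScreenResult(allowed=not hits, matched_themes=hits)

if __name__ == "__main__":
    result = screen_prompt("A watercolor painting of a lighthouse at dawn")
    if result.allowed:
        print("Prompt passed the naive screen; safe to forward to the API.")
    else:
        print("Blocked; matched themes:", result.matched_themes)
```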
Institutional analysis available with Professional
Regulatory exposure by statute, material risk assessment, vendor due diligence action items, and enforcement precedent. Available on Professional.
Monitoring
Stability AI has updated this document before.
Watcher includes same-day alerts, structured change summaries, and monitoring for up to 10 platforms.
Professional Governance Intelligence
Need provision-level monitoring and regulatory mapping?
Professional includes governance timelines, compliance memos, audit-ready analysis, and full provision tracking.
Cross-platform context
See how other platforms handle the CSAM and Child Sexual Exploitation Prohibition clause and similar provisions.
Governance Monitoring
Structured alerts for policy changes, governance events, and provision updates across 318+ platforms.