7 provisions flagged: 4 high severity, 3 medium severity, 0 low severity
Summary

This is Stability AI's Acceptable Use Policy, which sets the rules for how anyone can use Stability AI's image, video, audio, and language AI models, whether directly through the website or via the API in third-party applications. The policy prohibits specific categories of harmful content generation, including any sexual content involving minors, deepfakes designed to deceive, content promoting violence, and use of the AI to develop weapons or attack critical infrastructure. If you use Stability AI through a third-party app, the developer of that app is also bound by these rules and is responsible for ensuring their platform complies.

Technical / Legal Breakdown

This document is Stability AI's Acceptable Use Policy (AUP), which governs the permitted and prohibited uses of Stability AI's models, APIs, and services, establishing the contractual framework under which users and developers may access the company's generative AI outputs. The policy prohibits use of the services for unlawful purposes, generation of content that sexualizes minors, creation of disinformation or misleading synthetic media, harassment, and development of weapons or other harmful systems; Stability AI may suspend or terminate access for violations.

The policy applies both to direct end users and to developers or operators who deploy Stability AI models in downstream applications, creating a layered compliance obligation in which API customers are responsible for ensuring their platforms comply with the AUP.

The document engages regulatory frameworks including the EU AI Act, which imposes requirements on providers and deployers of AI systems regarding prohibited practices and high-risk applications, as well as national laws on child sexual abuse material, export controls, and content-moderation obligations under the UK Online Safety Act and the EU Digital Services Act. Compliance teams should note that the AUP's downstream-operator obligations may require contractual flow-down provisions in B2B agreements, and that the prohibited-use categories touching on biometric data, political manipulation, and critical infrastructure interact with sector-specific regulations across multiple jurisdictions.


Monitoring

Stability AI has updated this document before.



Cross-platform context

See how other platforms handle CSAM and child sexual exploitation prohibitions and similar clauses.


Mapped Governance Frameworks

California AB 2013: AI Training Data Transparency (California, US)
DMCA (United States, federal)
DSA (European Union)
Source & Archival Record
Last Captured May 11, 2026 10:32 UTC
Capture Method Automated scheduled archival capture
Document ID CA-D-000772
Version ID CA-V-002399
SHA-256 97243bfbcafaa8ec231cbf1c7f8c8ae3a12786517a8390a19011cdbe233bec46
✓ Snapshot stored ✓ Text extracted ✓ Change verified ✓ Hash verified
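The hash verification noted above can be reproduced against a local copy of the captured snapshot. A minimal sketch using Python's standard `hashlib` module; the snapshot filename is illustrative, and only the recorded digest is taken from the record above:

```python
import hashlib

# Recorded SHA-256 for version CA-V-002399, copied from the archival record above.
RECORDED_SHA256 = "97243bfbcafaa8ec231cbf1c7f8c8ae3a12786517a8390a19011cdbe233bec46"

def sha256_hex(data: bytes) -> str:
    """Return the hex SHA-256 digest of the captured document bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_snapshot(data: bytes, recorded: str = RECORDED_SHA256) -> bool:
    """True if the snapshot bytes hash to the recorded digest."""
    return sha256_hex(data) == recorded

# Usage (path is hypothetical; point it at your own stored snapshot):
# with open("stability_aup_snapshot.html", "rb") as f:
#     assert verify_snapshot(f.read())
```

Note that the digest is computed over the exact captured bytes, so any re-encoding or whitespace normalization of the snapshot will produce a mismatch even if the visible text is unchanged.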
