Businesses and developers who access Stability AI's models through the API and build them into their own products are responsible for ensuring their platforms comply with the AUP and that their end users do not violate its terms.
This analysis describes what Stability AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology for details.
This provision creates a layered obligation structure in which the entity closest to the end user (the API customer or operator) bears contractual responsibility for policy compliance throughout their deployment, not merely at the point of API access.
Interpretive note: The exact scope and mechanism of operator responsibility, including whether it includes indemnification obligations or audit rights, cannot be confirmed without access to the full policy text.
If you use a product built on Stability AI's API, the developer of that product is contractually obligated to enforce these use restrictions; if the developer fails to do so and your content violates the AUP, Stability AI may take action against the developer's API access, which could affect your ability to continue using that product.
How other platforms handle this
You are responsible for ensuring that your end users comply with these Terms and our usage policies. Any violation of these Terms by your end users will be deemed a violation by you, and we may suspend or terminate your access to the API accordingly.
Developers must outline their use case and obtain approval to access the Cohere API, and are expected to understand the models and their limitations. They should refer to model cards for detailed information and document the potential harms of their application. Certain use cases, such as violence, hate speech, fraud, and pr...
If you access our generative AI services through the API, you're also responsible for ensuring your use, and the use by those who access the services through your platform, complies with our usage policies. You must implement appropriate safeguards to prevent prohibited uses by your users.
Monitoring
Stability AI has changed this document before.
(1) Regulatory landscape: This operator responsibility model is consistent with the intermediary liability frameworks established by the EU Digital Services Act (DSA), which distinguishes between hosting providers and platforms with direct user relationships and imposes tiered obligations accordingly. The EU AI Act similarly distinguishes between AI system providers and deployers, with deployers bearing specific obligations regarding prohibited and high-risk use cases. In the US, Section 230 of the Communications Decency Act provides conditional liability protection for platforms but does not immunize active facilitation of prohibited content.

(2) Governance exposure: Medium to high for API customers. Operators who integrate Stability AI models into consumer products without adequate terms of service, content moderation, and user consent mechanisms face potential breach of the AUP and API access termination, as well as independent regulatory liability under applicable national frameworks. The DSA's due diligence requirements for platforms with EU users may require operators to document their content moderation systems.

(3) Jurisdiction flags: EU operators are subject to DSA obligations requiring documented content moderation procedures and reporting mechanisms. UK operators face Online Safety Act obligations requiring risk assessments and content controls. US operators should assess whether their platforms constitute interactive computer services under Section 230 and what content moderation obligations apply.

(4) Contract and vendor implications: API customers must ensure their own terms of service with end users incorporate the AUP's prohibited use categories, either by reference or through equivalent provisions. Procurement teams evaluating Stability AI as a vendor should treat the AUP's operator responsibility clause as a material contractual obligation requiring active implementation, not passive acknowledgment. Failure to maintain compliant downstream terms could constitute breach of the API agreement.

(5) Compliance considerations: Operators should conduct a terms of service audit to confirm their user agreements incorporate, or are consistent with, the AUP's prohibited use categories. Legal teams should implement a content moderation framework appropriate to their user base and use case, and should document their compliance posture in case of dispute with Stability AI or regulatory inquiry.
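In practice, the safeguard obligation discussed above means gating end-user requests before they ever reach the upstream API. The following is a minimal sketch of that pattern, under loud assumptions: the category names and keyword lists are purely illustrative (they are not Stability AI's actual prohibited-use taxonomy), and a production deployment would use a trained moderation classifier rather than keyword matching.

```python
# Hypothetical operator-side screening layer placed between end users
# and an upstream generative API. Everything named here is illustrative,
# not taken from Stability AI's policy text.

PROHIBITED_CATEGORIES = {
    # Example categories only; a real deployment would mirror the
    # operator's actual AUP-derived taxonomy.
    "violence": ["bomb-making", "weapon instructions"],
    "fraud": ["phishing page", "fake invoice"],
}


def screen_prompt(prompt: str):
    """Return (allowed, matched_category).

    Keyword matching is a placeholder for a real moderation classifier.
    """
    lowered = prompt.lower()
    for category, terms in PROHIBITED_CATEGORIES.items():
        if any(term in lowered for term in terms):
            return False, category
    return True, None


def handle_user_request(prompt: str) -> dict:
    """Reject prohibited prompts before any upstream API call is made."""
    allowed, category = screen_prompt(prompt)
    if not allowed:
        # Refusing here (and logging the decision) is what lets the
        # operator document its compliance posture later.
        return {"status": "rejected", "reason": category}
    # A real operator would forward the prompt to the upstream API here.
    return {"status": "forwarded"}
```

The key design point is that the refusal happens inside the operator's own service boundary, producing a record the operator controls, rather than relying on the upstream provider to catch the violation.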
Is ConductAtlas affiliated with Stability AI? No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Stability AI.