This analysis describes what Stability AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This provision creates a compliance chain that extends Stability AI's use restrictions beyond direct users to all downstream platforms and their end users, placing operational and legal responsibility on every layer of the deployment stack.
Interpretive note: The standard of 'appropriate controls' is not defined in the policy, creating uncertainty about what technical or procedural measures satisfy this obligation and how Stability AI would assess compliance.
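Because the policy does not define "appropriate controls," any implementation is a judgment call by the platform operator. As a purely illustrative sketch (the blocklist contents, function name, and printed messages below are hypothetical and not drawn from Stability AI's policy), one minimal technical measure a platform might adopt is screening end-user prompts before forwarding them upstream:

```python
# Illustrative only: one possible "control" is a local pre-request
# prompt screen. Nothing here is prescribed by Stability AI's policy;
# the blocklist and function are hypothetical examples.

BLOCKED_TERMS = {"example_prohibited_term", "another_banned_phrase"}  # hypothetical list

def passes_prompt_screen(prompt: str) -> bool:
    """Return False if the prompt contains any blocklisted term."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

prompt = "generate a landscape painting"
if passes_prompt_screen(prompt):
    print("forwarding request to upstream API")   # placeholder for the real call
else:
    print("request rejected by local policy screen")
```

A keyword filter alone would likely be a weak control in practice; platforms commonly layer it with classifier-based moderation, rate limiting, and abuse reporting, but the policy text gives no guidance on which combination would satisfy the obligation.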
The policy establishes conduct rules that apply to all users of Stability AI's models and services, including people using those models through third-party applications built on Stability AI's API. Users who violate the prohibited use categories risk suspension or termination of access, potentially without advance notice, depending on the severity of the violation. You can review the full list of prohibited use categories at stability.ai/use-policy to assess whether your intended use cases are permitted before building or deploying applications.
How other platforms handle this
If you access our generative AI services through the API, you're also responsible for ensuring your use, and the use by those who access the services through your platform, complies with our usage policies. You must implement appropriate safeguards to prevent prohibited uses by your users.
You are responsible for ensuring that your end users comply with these Terms and our usage policies. Any violation of these Terms by your end users will be deemed a violation by you, and we may suspend or terminate your access to the API accordingly.
We may audit your app to ensure compliance with these Terms. You must cooperate with any audit and provide us with information and access to systems, data, and personnel necessary to conduct the audit. You must also maintain records sufficient to demonstrate your compliance with these Terms and prov...
Monitoring
Stability AI has changed this document before.
"If you access our Services through an API or otherwise integrate our models into your products or services, you must ensure that your users are prohibited from using our Services in ways that violate this Policy. You are responsible for implementing appropriate controls to prevent prohibited uses by your users."
Excerpt from Stability AI's Acceptable Use Policy
ConductAtlas is an independent monitoring service. It is not affiliated with, endorsed by, or sponsored by Stability AI.