Stability AI · Stability AI Acceptable Use Policy

Downstream Developer Enforcement Obligation

High severity · Medium confidence · Explicit document language · Unique · 0 of 325 platforms

This analysis describes what Stability AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This provision creates a compliance chain that extends Stability AI's use restrictions beyond direct users to all downstream platforms and their end users, placing operational and legal responsibility on every layer of the deployment stack.

Interpretive note: The standard of 'appropriate controls' is not defined in the policy, creating uncertainty about what technical or procedural measures satisfy this obligation and how Stability AI would assess compliance.

Consumer impact (what this means for users)

The policy establishes conduct rules that apply to all users of Stability AI's models and services, including people using those models through third-party applications built on Stability AI's API. Users who violate the prohibited use categories risk suspension or termination of access without necessarily receiving advance notice, depending on the severity of the violation. You can review the full list of prohibited use categories at stability.ai/use-policy to assess whether your intended use cases are permitted before building or deploying applications.

How other platforms handle this

Google Gemini (High severity)

If you access our generative AI services through the API, you're also responsible for ensuring your use, and the use by those who access the services through your platform, complies with our usage policies. You must implement appropriate safeguards to prevent prohibited uses by your users.

Perplexity AI (High severity)

You are responsible for ensuring that your end users comply with these Terms and our usage policies. Any violation of these Terms by your end users will be deemed a violation by you, and we may suspend or terminate your access to the API accordingly.

Meta (High severity)

We may audit your app to ensure compliance with these Terms. You must cooperate with any audit and provide us with information and access to systems, data, and personnel necessary to conduct the audit. You must also maintain records sufficient to demonstrate your compliance with these Terms and prov...


Monitoring

Stability AI has changed this document before.

Original Clause Language
If you access our Services through an API or otherwise integrate our models into your products or services, you must ensure that your users are prohibited from using our Services in ways that violate this Policy. You are responsible for implementing appropriate controls to prevent prohibited uses by your users.

— Excerpt from the Stability AI Acceptable Use Policy

Applicable regulations

CFAA
United States Federal

Provision details

Document information
Document
Stability AI Acceptable Use Policy
Entity
Stability AI
Document last updated
May 11, 2026
Tracking information
First tracked
May 11, 2026
Last verified
May 12, 2026
Record ID
CA-P-010682
Document ID
CA-D-00772
Evidence Provenance
Source URL
Wayback Machine
Content hash (SHA-256)
6fe74fd03c821a478b697f38b02deeafcbbb7b9353c5fd3ff39e20c43b1db53c
Analysis generated
May 11, 2026 13:00 UTC
Methodology
Evidence
✓ Snapshot stored   ✓ Hash verified
Citation Record
Entity: Stability AI
Document: Stability AI Acceptable Use Policy
Record ID: CA-P-010682
Captured: 2026-05-11 13:00:52 UTC
SHA-256: 6fe74fd03c821a47…
URL: https://conductatlas.com/platform/stability-ai/stability-ai-acceptable-use-policy/downstream-developer-enforcement-obligation/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity
High
Categories



Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Stability AI's Downstream Developer Enforcement Obligation clause do?

This provision creates a compliance chain that extends Stability AI's use restrictions beyond direct users to all downstream platforms and their end users, placing operational and legal responsibility on every layer of the deployment stack.

Is ConductAtlas affiliated with Stability AI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Stability AI.