Stability AI · Stability AI Acceptable Use Policy

Prohibition on Child Sexual Abuse Material (CSAM)

High severity · Low confidence · Inferred from context · Rare · 1 of 325 platforms
Document Record

What it is

The policy prohibits using Stability AI's models to generate any sexual content involving minors, regardless of whether the output is photorealistic or stylized.

This analysis describes what Stability AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

Generation of CSAM is a criminal offense in virtually all jurisdictions, and this prohibition establishes that any such use constitutes an immediate and absolute violation of the policy with no exceptions or contextual carve-outs.

Interpretive note: The exact verbatim text of this provision was not available due to HTML truncation; the description is based on publicly known AUP content and the document's stated subject matter regarding responsible AI use.

Consumer impact (what this means for users)

Any user or developer who generates sexual content involving minors using Stability AI's services violates this provision and may have their access terminated; this prohibition also applies to operators who deploy Stability AI models in downstream applications and fail to prevent such generation.

How other platforms handle this

Amazon · Medium severity

You may not use the Services to: violate the security or integrity of any network, computer or communications system, software application, or network or computing device; access or use any system without permission, including attempting to probe, scan, or test the vulnerability of a system or to br...

Runway · Medium severity

You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.

Mistral AI · Medium severity

Customer will not, and will not permit any other person (including any End User) to: ... (d) attempt to reverse engineer, decompile, or otherwise attempt to discover the source code or underlying components (e.g., algorithms, weights, or systems) of the Mistral AI Products, including using the Outpu...


Monitoring

Stability AI has changed this document before.


Institutional analysis (Compliance & governance intelligence)

1. Regulatory landscape: This provision directly implicates criminal statutes in all major jurisdictions, including the PROTECT Act in the United States, which criminalizes virtual CSAM including AI-generated material, and equivalent laws in the UK under the Sexual Offences Act and the Protection of Children Act, and in the EU under Directive 2011/93/EU. The National Center for Missing and Exploited Children (NCMEC) operates the CyberTipline, and platforms with knowledge of CSAM are subject to mandatory reporting obligations under 18 U.S.C. § 2258A in the US. Enforcement authorities include the FBI, Internet Crimes Against Children task forces, and equivalent national agencies.

2. Governance exposure: High. Operators who build platforms on Stability AI's API and fail to implement adequate safeguards against CSAM generation face potential criminal liability, mandatory reporting obligations, and civil exposure independent of Stability AI's own policy enforcement.

3. Jurisdiction flags: This prohibition applies globally with no jurisdictional carve-outs. All major jurisdictions impose criminal liability for CSAM generation. AI-generated CSAM has been explicitly addressed by US federal prosecutors and UK law enforcement, and legal ambiguity in some jurisdictions regarding purely synthetic material is narrowing rapidly.

4. Contract and vendor implications: API customers should ensure their own terms of service explicitly prohibit CSAM generation, implement technical safeguards such as content classifiers (a minimal gating sketch follows this list), and document their compliance posture. Failure to flow down this prohibition to end users creates direct contractual breach exposure with Stability AI and potential independent legal liability.

5. Compliance considerations: Operators should conduct a content safety audit to verify that any fine-tuned or customized models built on Stability AI's infrastructure cannot be prompted to generate prohibited content. Legal teams should review the mandatory reporting obligations applicable to their jurisdiction and platform type and establish an incident response procedure for detected violations.
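Point (4) above recommends technical safeguards such as content classifiers for operators who expose Stability AI models downstream. The sketch below illustrates one common gating pattern, under loud assumptions: `SafetyVerdict`, `classify_prompt`, and `gated_generate` are hypothetical names invented here, the stub classifier does nothing useful and must be replaced with a real moderation model or service, and nothing in it reflects Stability AI's actual API.

```python
import hashlib
import logging
from dataclasses import dataclass
from typing import Callable

@dataclass
class SafetyVerdict:
    blocked: bool
    category: str  # e.g. "csam", "ok"

def classify_prompt(prompt: str) -> SafetyVerdict:
    # Stand-in for an operator's real content-safety classifier
    # (a trained model or a third-party moderation API). This static
    # stub exists only so the control flow below is runnable.
    return SafetyVerdict(blocked=False, category="ok")

def gated_generate(prompt: str, generate: Callable[[str], bytes]) -> bytes:
    """Classify first; call the image model only if the prompt passes."""
    verdict = classify_prompt(prompt)
    if verdict.blocked:
        # Log a digest rather than the prompt text, so the incident
        # record does not itself retain potentially prohibited content.
        digest = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
        logging.warning("blocked prompt: category=%s sha256=%s",
                        verdict.category, digest)
        raise PermissionError(f"prompt blocked: {verdict.category}")
    return generate(prompt)
```

The design choice worth noting is ordering and evidence handling: classification runs before any generation call, and a refusal is recorded as a category plus a prompt digest, so the incident log supports audit and reporting review without itself retaining the offending text.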


Applicable agencies

  • FTC: The FTC has authority over unfair and deceptive practices by platforms that fail to implement adequate child safety measures in consumer-facing AI products.

Applicable regulations

  • CFAA (United States, federal)
  • DMCA (United States, federal)
  • DSA (European Union)
  • Trump Executive Order on AI Policy Framework (United States)

Provision details

Document information
Document: Stability AI Acceptable Use Policy
Entity: Stability AI
Document last updated: May 11, 2026

Tracking information
First tracked: May 11, 2026
Last verified: May 12, 2026
Record ID: CA-P-011533
Document ID: CA-D-00772

Evidence Provenance
Source URL: Wayback Machine
Content hash (SHA-256): 6fe74fd03c821a478b697f38b02deeafcbbb7b9353c5fd3ff39e20c43b1db53c
Analysis generated: May 11, 2026 13:00 UTC
Evidence: ✓ Snapshot stored · ✓ Hash verified
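The "Hash verified" mark above corresponds to a reproducible check: recompute the SHA-256 digest of the stored snapshot and compare it to the recorded value. A minimal sketch follows; the local file name is hypothetical, and the recorded digest may have been computed over the service's normalized capture rather than raw page bytes, so a mismatch against an independently fetched copy does not by itself indicate tampering.

```python
import hashlib
from pathlib import Path

# Digest recorded in the Evidence Provenance section above.
EXPECTED_SHA256 = "6fe74fd03c821a478b697f38b02deeafcbbb7b9353c5fd3ff39e20c43b1db53c"

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large snapshots never load fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical local copy of the archived snapshot.
snapshot = Path("stability-ai-aup-snapshot-2026-05-11.html")
if snapshot.exists():
    match = sha256_of(snapshot) == EXPECTED_SHA256
    print("hash verified" if match else "hash mismatch")
else:
    print(f"snapshot not found: {snapshot}")
```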
Citation Record
Entity: Stability AI
Document: Stability AI Acceptable Use Policy
Record ID: CA-P-011533
Captured: 2026-05-11 13:00:52 UTC
SHA-256: 6fe74fd03c821a47…
URL: https://conductatlas.com/platform/stability-ai/stability-ai-acceptable-use-policy/prohibition-on-child-sexual-abuse-material-csam/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity: High



Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Stability AI's Prohibition on Child Sexual Abuse Material (CSAM) clause do?

The policy prohibits using Stability AI's models to generate any sexual content involving minors, regardless of whether the output is photorealistic or stylized. Because CSAM generation is a criminal offense in virtually all jurisdictions, any such use constitutes an immediate and absolute violation of the policy with no exceptions or contextual carve-outs.

How does this clause affect you?

Any user or developer who generates sexual content involving minors using Stability AI's services violates this provision and may have their access terminated; this prohibition also applies to operators who deploy Stability AI models in downstream applications and fail to prevent such generation.

How many platforms have this type of clause?

ConductAtlas has identified this type of provision on 1 of 325 tracked platforms. See the full comparison.

Is ConductAtlas affiliated with Stability AI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Stability AI.