Stability AI · Stability AI Privacy Policy

AI Model Training Using User Inputs

High severity · Medium confidence · Explicit document language · Unique (0 of 325 platforms)
Document Record

What it is

Stability AI states that the prompts you type and the images or content generated from those prompts may be used to train and improve its AI systems.

This analysis describes what Stability AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This provision means that creative inputs and outputs produced during your use of Stability AI tools may become part of the data used to improve the company's AI models, which raises questions about consent, data minimization, and the scope of use beyond the immediate service interaction.

Interpretive note: The precise scope of what constitutes user inputs subject to AI training, and whether an opt-out mechanism exists, could not be fully confirmed from the available document text.

Consumer impact (what this means for users)

The terms authorize Stability AI to use your submitted prompts and generated outputs for AI model training purposes, which applies to all users unless applicable law or a specific opt-out mechanism limits this use in practice.

What you can do

⚠️ These actions may provide transparency or partial mitigation but may not fully address the underlying issue. Effectiveness varies by jurisdiction and individual circumstances.
  • Delete Your Data
    Email privacy@stability.ai to request that your data not be used for AI model training or to request deletion of your submitted prompts and outputs. Describe the specific data and processing activity you wish to restrict.

How other platforms handle this

Ideogram (Medium severity)

We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.

Windsurf (Medium severity)

We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization). We may leverage these models independent of user selection for processi...

Supabase (Medium severity)

After registration, you may create, upload or transmit files, documents, videos, images, data or information as part of your use of the Service (collectively, "User Content"). This includes any inputs you provide to our AI-powered support tools and outputs generated in response to your inputs. User ...


Monitoring

Stability AI has changed this document before.

Original Clause Language

"We may use the content you submit to our Services, including prompts and generated outputs, to train, improve, and develop our AI models and Services."

— Excerpt from the Stability AI Privacy Policy


Institutional analysis (Compliance & governance intelligence)

1) REGULATORY LANDSCAPE: This provision implicates GDPR Article 6 (lawful basis for processing) and Article 5 (purpose limitation and data minimization) for EU and UK users. The use of user-generated content for AI training may require a clearly identified lawful basis; legitimate-interests assertions for this purpose have received scrutiny from EU supervisory authorities. The EU AI Act may impose additional transparency and documentation requirements for training data used in general-purpose AI models. The California Consumer Privacy Act and CPRA may also apply if this constitutes a secondary use of personal information beyond the disclosed primary purpose.

2) GOVERNANCE EXPOSURE: High. The use of user inputs for AI training is a materially significant secondary processing activity. If the policy does not clearly identify the lawful basis for this use under GDPR, or if users are not provided a meaningful opt-out, this creates regulatory exposure with the UK ICO and EU supervisory authorities. The absence of explicit granular consent for AI training use is an area of active regulatory interest across multiple jurisdictions.

3) JURISDICTION FLAGS: EU and UK users face the highest exposure given GDPR and UK GDPR requirements for lawful basis and purpose limitation. California users may have CPRA rights regarding secondary use of personal information. The provision as stated applies globally, but enforcement and user rights vary materially by jurisdiction.

4) CONTRACT AND VENDOR IMPLICATIONS: For B2B customers using the Stability AI API, this provision may affect their own downstream data processing obligations, particularly if end-user data flows through the API. Enterprise customers should assess whether their own privacy notices and user agreements disclose this secondary use to their end users. Procurement teams should evaluate whether contractual data processing agreements with Stability AI adequately address this use case.

5) COMPLIANCE CONSIDERATIONS: Compliance teams should audit whether the consent mechanism or notice provided at the point of data collection adequately discloses AI training as a purpose; review whether a Data Protection Impact Assessment has been conducted for this processing activity; assess whether users are provided a clear and accessible opt-out or objection mechanism; and verify that the lawful basis relied upon for this processing is documented in the company's Article 30 records of processing activities.


Applicable agencies

  • FTC
    The FTC has authority over unfair or deceptive data practices under the FTC Act, which may apply if the scope of AI training data use is not clearly disclosed to users.

Applicable regulations

  • EU AI Act (European Union)
  • California AB 2013 AI Training Data Transparency (US-CA)
  • Colorado AI Act (US-CO)
  • EU AI Act - High Risk Provisions (EU)
  • GDPR (European Union)
  • Texas AI Act (Texas, USA)
  • Trump Executive Order on AI Policy Framework (US)
  • UK GDPR (United Kingdom)

Provision details

Document information
  • Document: Stability AI Privacy Policy
  • Entity: Stability AI
  • Document last updated: May 5, 2026

Tracking information
  • First tracked: April 28, 2026
  • Last verified: May 12, 2026
  • Record ID: CA-P-011443
  • Document ID: CA-D-00330

Evidence Provenance
  • Source URL: Wayback Machine
  • Content hash (SHA-256): ab8463c1a698bccc246c55dd2af2b3ea094ea7c70c2ca61b926c6b9eac014966
  • Analysis generated: April 28, 2026 05:31 UTC
  • Evidence: ✓ Snapshot stored, ✓ Hash verified
Citation Record
Entity: Stability AI
Document: Stability AI Privacy Policy
Record ID: CA-P-011443
Captured: 2026-04-28 05:31:08 UTC
SHA-256: ab8463c1a698bccc…
URL: https://conductatlas.com/platform/stability-ai/stability-ai-privacy-policy/ai-model-training-using-user-inputs/
Accessed: May 14, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
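The evidence record above pairs the archived snapshot with a SHA-256 content hash. As a minimal sketch of how such a record could be re-verified (the record does not specify exactly which bytes are hashed, e.g. raw HTML versus normalized text, so the snapshot bytes below are a placeholder), a researcher could compare an archived copy against the recorded digest like this:

```python
import hashlib

# Digest recorded in the ConductAtlas evidence record above.
RECORDED_SHA256 = "ab8463c1a698bccc246c55dd2af2b3ea094ea7c70c2ca61b926c6b9eac014966"

def verify_snapshot(snapshot_bytes: bytes, recorded_hash: str) -> bool:
    """Return True if the SHA-256 digest of the snapshot matches the recorded hash."""
    digest = hashlib.sha256(snapshot_bytes).hexdigest()
    return digest == recorded_hash.lower()

# With the genuine archived bytes this would print True;
# placeholder bytes will not match the recorded digest.
print(verify_snapshot(b"placeholder snapshot bytes", RECORDED_SHA256))
```

The comparison lowercases the recorded value so that hex-case differences between tools do not cause a spurious mismatch.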
Classification
  • Severity: High



Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions


Is ConductAtlas affiliated with Stability AI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Stability AI.