Stability AI · Stability AI Privacy Policy

AI Model Training Use of User Content

High severity

What it is

Stability AI may use the prompts you type and the images or content you generate to train and improve its AI models, meaning your creative inputs become part of the company's AI development process.

Consumer impact (what this means for users)

Your prompts and generated outputs submitted to Stability AI's tools may be used to train future AI models, which is a significant secondary use of your personal creative content beyond the immediate service you requested.

What you can do

⚠️ These actions may provide transparency or partial mitigation but may not fully address the underlying issue. Effectiveness varies by jurisdiction and individual circumstances.
  • Delete your data (timeframe: within 30 days)
    Email privacy@stability.ai to request that your personal data, including prompts and outputs, be excluded from AI model training and/or deleted from Stability AI's systems. Include your account details and specify the nature of your request.

Cross-platform context

See how other platforms handle AI Model Training Use of User Content and similar clauses.


Why it matters (compliance & risk perspective)

Most users do not expect their creative prompts to be used as training data for commercial AI systems, and this use may be difficult to undo once data is incorporated into model weights.

Original clause language
We may use the content you submit to our Services, including prompts, inputs, and outputs, to improve and develop our AI models and Services. This may include using such content for training, fine-tuning, and evaluating our AI systems.

Institutional analysis (Compliance & legal intelligence)

REGULATORY FRAMEWORK: This provision implicates GDPR Art. 6(1)(f) (legitimate interests) and Art. 22 (automated decision-making), as well as the UK GDPR equivalents enforced by the ICO. If prompts embed special category data, Art. 9 also applies. Under the EU AI Act (Regulation (EU) 2024/1689), Article 10 imposes data governance requirements on high-risk AI systems, and Article 53 imposes transparency and documentation obligations on providers of general-purpose AI models, including disclosures about training data. In the US, Section 5 of the FTC Act governs unfair or deceptive data practices.


Applicable agencies

  • FTC
    The FTC has authority under Section 5 of the FTC Act to regulate unfair or deceptive data practices, including undisclosed use of consumer content for AI training.
    File a complaint →

Provision details

Document information
  • Document: Stability AI Privacy Policy
  • Entity: Stability AI
  • Document last updated: April 29, 2026

Tracking information
  • First tracked: April 28, 2026
  • Last verified: April 28, 2026
  • Record ID: CA-P-003724
  • Document ID: CA-D-00330
Evidence provenance
  • Source URL: Wayback Machine
  • SHA-256: ab8463c1a698bccc246c55dd2af2b3ea094ea7c70c2ca61b926c6b9eac014966
  • Verified: ✓ Snapshot stored · ✓ Change verified
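The recorded SHA-256 digest lets anyone independently confirm that a downloaded copy of the archived snapshot matches the version analyzed here. A minimal sketch in Python, assuming the snapshot has been saved locally (the filename "snapshot.html" is a placeholder, not a path published by the archive):

```python
import hashlib

# Digest recorded in the provenance block above.
RECORDED_SHA256 = "ab8463c1a698bccc246c55dd2af2b3ea094ea7c70c2ca61b926c6b9eac014966"

def sha256_of_file(path, chunk_size=65536):
    """Hash the file in chunks so large snapshots need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (replace the placeholder filename with your downloaded snapshot):
# assert sha256_of_file("snapshot.html") == RECORDED_SHA256
```

Any difference in the digest, even one character, indicates the local copy is not byte-for-byte identical to the archived snapshot.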
How to Cite
ConductAtlas Policy Archive
Entity: Stability AI | Document: Stability AI Privacy Policy | Record: CA-P-003724
Captured: 2026-04-28 05:31:08 UTC | SHA-256: ab8463c1a698bccc…
URL: https://conductatlas.com/platform/stability-ai/stability-ai-privacy-policy/ai-model-training-use-of-user-content/
Accessed: May 2, 2026
Classification
  • Severity: High