Stability AI states that the prompts you type and the images or content generated from those prompts may be used to train and improve its AI systems.
This analysis describes what Stability AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology for details.
This provision means that creative inputs and outputs produced during your use of Stability AI tools may become part of the data used to improve the company's AI models, which raises questions about consent, data minimization, and the scope of use beyond the immediate service interaction.
Interpretive note: The precise scope of what constitutes user inputs subject to AI training, and whether an opt-out mechanism exists, could not be fully confirmed from the available document text.
The terms authorize Stability AI to use your submitted prompts and generated outputs for AI model training purposes, which applies to all users unless applicable law or a specific opt-out mechanism limits this use in practice.
How other platforms handle this
We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.
Monitoring
Stability AI has changed this document before.
"We may use the content you submit to our Services, including prompts and generated outputs, to train, improve, and develop our AI models and Services." — Excerpt from Stability AI's Privacy Policy
1) REGULATORY LANDSCAPE: This provision implicates GDPR Article 6 (lawful basis for processing) and Article 5 (purpose limitation and data minimization) for EU and UK users. The use of user-generated content for AI training may require a clearly identified lawful basis; legitimate-interests assertions for this purpose have received scrutiny from EU supervisory authorities. The EU AI Act may impose additional transparency and documentation requirements for training data used in general-purpose AI models. The California Consumer Privacy Act and CPRA may also apply if this constitutes a secondary use of personal information beyond the disclosed primary purpose.

2) GOVERNANCE EXPOSURE: High. The use of user inputs for AI training is a materially significant secondary processing activity. If the policy does not clearly identify the lawful basis for this use under GDPR, or if users are not provided a meaningful opt-out, this creates regulatory exposure with the UK ICO and EU supervisory authorities. The absence of explicit granular consent for AI training use is an area of active regulatory interest across multiple jurisdictions.

3) JURISDICTION FLAGS: EU and UK users face the highest exposure given GDPR and UK GDPR requirements for lawful basis and purpose limitation. California users may have CPRA rights regarding secondary use of personal information. The provision as stated applies globally, but enforcement and user rights vary materially by jurisdiction.

4) CONTRACT AND VENDOR IMPLICATIONS: For B2B customers using the Stability AI API, this provision may affect their own downstream data processing obligations, particularly if end-user data flows through the API. Enterprise customers should assess whether their own privacy notices and user agreements disclose this secondary use to their end users. Procurement teams should evaluate whether contractual data processing agreements with Stability AI adequately address this use case.
5) COMPLIANCE CONSIDERATIONS: Compliance teams should audit whether the consent mechanism or notice provided at point of data collection adequately discloses AI training as a purpose; review whether a Data Protection Impact Assessment has been conducted for this processing activity; assess whether users are provided a clear and accessible opt-out or objection mechanism; and verify that the lawful basis relied upon for this processing is documented in the company's Article 30 records of processing activities.
How 10 AI platforms describe the use of user data for model training, improvement, and development, based on archived governance provisions.
Built from archived source documents, structured governance mappings, and historical version tracking.
Is ConductAtlas affiliated with Stability AI? No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Stability AI.