Leonardo AI · Leonardo AI Privacy Policy

AI Model Training Use of User Content

High severity

What it is

Leonardo AI can use your text prompts and the images or videos you generate to train and improve its AI — though you can ask them to stop by emailing privacy@leonardo.ai.

Consumer impact (what this means for users)

This provision means that everything you type into Leonardo AI and every image or video you generate may be fed back into the company's AI training pipeline, contributing to its commercial product development.

What you can do

⚠️ These actions may provide transparency or partial mitigation but may not fully address the underlying issue. Effectiveness varies by jurisdiction and individual circumstances.
  • Opt Out of AI Training Use
    Send an email to privacy@leonardo.ai stating that you opt out of your prompts and generated content being used for AI model training. Include your account email address and a clear statement of your opt-out request.

Cross-platform context

See how other platforms handle AI Model Training Use of User Content and similar clauses.


Why it matters (compliance & risk perspective)

Your creative inputs and AI outputs are valuable training data for Leonardo AI's commercial models; without opting out, your activity directly contributes to a product you may not benefit from.

Original clause language
We may use the content you input into our Services (such as prompts) and the content generated by our Services in response to your inputs, to train and improve our AI models and Services. You may opt out of this use by contacting us at privacy@leonardo.ai.

Institutional analysis (Compliance & legal intelligence)

REGULATORY FRAMEWORK: This provision implicates GDPR Art. 6(1)(f) (legitimate interests as lawful basis for AI training), GDPR Art. 22 (automated decision-making), and Recital 47 (legitimate interests balancing test). Under CCPA/CPRA §1798.121, use of personal data for AI training may constitute a 'sensitive use' requiring opt-in consent in some contexts. The Australian Privacy Act 1988 (APP 3, 6) also governs collection and secondary use of personal information. Primary enforcement authorities are EU supervisory authorities (lead authority dependent on Leonardo AI's EU establishment), the California Privacy Protection Agency (CPPA), and the OAIC.


Applicable agencies

  • FTC
    The FTC has jurisdiction over unfair or deceptive practices under FTC Act Section 5, including misleading disclosures about how consumer data is used in AI training.
    File a complaint with the FTC.

Provision details

Document information
Document
Leonardo AI Privacy Policy
Entity
Leonardo AI
Document last updated
April 29, 2026
Tracking information
First tracked
April 30, 2026
Last verified
April 30, 2026
Record ID
CA-P-004194
Document ID
CA-D-00480
Evidence Provenance
Source URL
Wayback Machine
SHA-256
ac60ef265e1e05c94b28dd719ab4d9bf7339502e5ad85457006b8f18e885cc23
Verified
✓ Snapshot stored   ✓ Change verified
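The published SHA-256 digest lets anyone independently confirm that an archived copy of the policy matches the snapshot on record. A minimal sketch, assuming you have downloaded the snapshot file locally (the file path and helper names here are illustrative, not part of the archive's tooling):

```python
import hashlib

# Digest published in the evidence provenance record above.
EXPECTED_SHA256 = "ac60ef265e1e05c94b28dd719ab4d9bf7339502e5ad85457006b8f18e885cc23"

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Stream the file through SHA-256 so large snapshots fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_snapshot(path: str, expected: str = EXPECTED_SHA256) -> bool:
    """Return True only if the local copy matches the published digest."""
    return sha256_of_file(path) == expected.lower()
```

If `verify_snapshot("leonardo-privacy-policy.html")` returns True, the local copy is byte-for-byte identical to the document captured on 2026-04-30; any edit to the policy text, however small, would produce a different digest.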
How to Cite
ConductAtlas Policy Archive
Entity: Leonardo AI | Document: Leonardo AI Privacy Policy | Record: CA-P-004194
Captured: 2026-04-30 06:59:23 UTC | SHA-256: ac60ef265e1e05c9…
URL: https://conductatlas.com/platform/leonardo-ai/leonardo-ai-privacy-policy/ai-model-training-use-of-user-content/
Accessed: May 2, 2026
Classification
Severity
High
Categories
