Luma AI · Luma AI Privacy Policy

AI Model Training on User Content

High severity

What it is

Luma uses the images, videos, and AI chat conversations you submit to the platform to train and improve its AI models, and content you provide may appear in AI-generated outputs.

Consumer impact (what this means for users)

Your uploaded images, videos, and AI conversation text are used to train Luma's AI models, meaning sensitive or personal content you submit contributes to model development without an explicit opt-out mechanism.

What you can do

⚠️ These actions may provide transparency or partial mitigation but may not fully address the underlying issue. Effectiveness varies by jurisdiction and individual circumstances.
  • Delete Your Data
    Email hello@lumalabs.ai requesting deletion of your personal data including uploaded content and conversation history. Identify yourself and specify the data you want deleted. Luma may ask you to verify your identity before processing the request.


Why it matters (compliance & risk perspective)

Users who submit personal, sensitive, or proprietary content in conversations or uploads may have that content used to train Luma's AI without a clear opt-out, and it could potentially surface in outputs generated for other users.

Original clause language
"Train, develop, and improve the artificial intelligence, machine learning, and models that we use to support our Services; ... We collect the content of your conversations, including any information you choose to provide in your Inputs, and this information may be reproduced in the Outputs."

Institutional analysis (Compliance & legal intelligence)

(1) REGULATORY FRAMEWORK: This provision implicates GDPR Art. 6(1)(f) (legitimate interests as a legal basis for AI training), Art. 13 (transparency about processing purposes), and Art. 22 (automated decision-making); the equivalent UK GDPR provisions enforced by the ICO; CCPA/CPRA §1798.100 (right to know about data use) and §1798.120 (opt-out of sale/sharing), enforced by the CPPA; and FTC Act Section 5 for potentially unfair data practices if the training use is not adequately disclosed. The EU AI Act (Regulation 2024/1689) may impose additional obligations depending on whether Luma's AI systems are classified as general-purpose AI models.


Applicable agencies

  • FTC
    The FTC has enforcement authority under Section 5 of the FTC Act over unfair or deceptive practices related to AI training data use and inadequate disclosure of how user content is processed.

Provision details

Document information
  • Document: Luma AI Privacy Policy
  • Entity: Luma AI
  • Document last updated: April 29, 2026
Tracking information
  • First tracked: April 30, 2026
  • Last verified: April 30, 2026
  • Record ID: CA-P-004290
  • Document ID: CA-D-00497
Evidence Provenance
  • Source URL: Wayback Machine
  • SHA-256: 67674aa1a904b7c68bd20d464b6be4c1e518b1fe7e03c01dfdb4e87cfd26cb78
  • Verified: ✓ Snapshot stored · ✓ Change verified
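The SHA-256 digest above lets anyone independently confirm that an archived copy of the policy matches the recorded snapshot. A minimal Python sketch of that check (the local snapshot filename is hypothetical; only the digest comes from the record above):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 hex digest of a file, streaming in chunks
    so large snapshots are not loaded into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Digest recorded in the provenance block above
RECORDED = "67674aa1a904b7c68bd20d464b6be4c1e518b1fe7e03c01dfdb4e87cfd26cb78"

# Hypothetical local snapshot file; a match means the copy is byte-identical
# to the archived version.
# verified = sha256_of_file("luma-privacy-policy-snapshot.html") == RECORDED
```

If the computed digest differs from the recorded one, the local copy was altered or reflects a different revision of the document.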
How to Cite
ConductAtlas Policy Archive
Entity: Luma AI | Document: Luma AI Privacy Policy | Record: CA-P-004290
Captured: 2026-04-30 07:54:18 UTC | SHA-256: 67674aa1a904b7c6…
URL: https://conductatlas.com/platform/luma-ai/luma-ai-privacy-policy/ai-model-training-on-user-content/
Accessed: May 2, 2026
Classification
  • Severity: High