LinkedIn · LinkedIn Privacy Policy

AI and Generative AI Model Training

High severity

What it is

LinkedIn uses your personal data — including your profile, posts, and activity — to train its AI and generative AI systems. You can opt out, but you must actively do so in your settings.

Consumer impact (what this means for users)

Your LinkedIn profile content, posts, and activity data may be used to train generative AI models; this affects how your professional identity and intellectual contributions are commercially exploited by LinkedIn and Microsoft.

What you can do

⚠️ These actions may provide transparency or partial mitigation but may not fully address the underlying issue. Effectiveness varies by jurisdiction and individual circumstances.
  • Opt Out of Generative AI Training
    Go to LinkedIn Privacy Settings at https://www.linkedin.com/psettings/privacy, find the 'Data for Generative AI Improvement' section, and toggle off the setting to opt out of your data being used to train generative AI models.


Why it matters (compliance & risk perspective)

Your professional content and behavior on LinkedIn may be used to build AI systems without your active knowledge; the setting defaults to opted in, so you must take action yourself to opt out.

Original clause language
We use personal data and other data to train and improve LinkedIn and Microsoft AI/ML models for the recommendations and other features described in Section 2.4, as well as generative AI features. We explain how you can opt out of your personal data being used to train generative AI models in Section 4.

Institutional analysis (Compliance & legal intelligence)

REGULATORY FRAMEWORK: This provision implicates GDPR Art. 6(1)(f) (legitimate interests as a lawful basis for AI training), Art. 21 (right to object to processing based on legitimate interests), Art. 22 (automated decision-making), and Recital 47 (legitimate interests balancing test). The EU AI Act (Regulation 2024/1689) may apply to high-risk AI systems in employment and recruitment contexts. The Irish DPC is the lead supervisory authority for EU/EEA processing. For California residents, CCPA/CPRA §1798.121 (right to limit use of sensitive personal information) and §1798.120 (right to opt out of sale/sharing) are relevant.


Applicable agencies

  • FTC
    The FTC has enforcement authority under FTC Act Section 5 over unfair or deceptive data practices, including undisclosed use of consumer data for AI training purposes.

Provision details

Document information
Document
LinkedIn Privacy Policy
Entity
LinkedIn
Document last updated
April 29, 2026
Tracking information
First tracked
April 28, 2026
Last verified
April 28, 2026
Record ID
CA-P-003974
Document ID
CA-D-00090
Evidence Provenance
Source URL
Wayback Machine
SHA-256
ce4e84ffc9e0fc98014761639e090fc61c45e8e9f63dbb4873f713aea4017044
Verified
✓ Snapshot stored   ✓ Change verified
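The recorded SHA-256 digest lets anyone independently check that a stored snapshot of the policy text has not been altered since capture. A minimal sketch of that check (function names are illustrative, not part of any ConductAtlas tooling; the snapshot bytes would come from the archived file):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the lowercase hex SHA-256 digest of raw bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_snapshot(snapshot_bytes: bytes, recorded_digest: str) -> bool:
    """Compare a stored snapshot against the digest recorded at capture time."""
    return sha256_hex(snapshot_bytes) == recorded_digest.lower()

# Demonstration with a known input: sha256 of the empty byte string.
empty_digest = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
print(verify_snapshot(b"", empty_digest))  # True
```

A mismatch would indicate the stored snapshot differs from what was hashed at capture time, so the provenance claim should not be relied on.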
How to Cite
ConductAtlas Policy Archive
Entity: LinkedIn | Document: LinkedIn Privacy Policy | Record: CA-P-003974
Captured: 2026-04-28 09:45:05 UTC | SHA-256: ce4e84ffc9e0fc98…
URL: https://conductatlas.com/platform/linkedin/linkedin-privacy-policy/ai-and-generative-ai-model-training/
Accessed: May 2, 2026
Classification
Severity
High
Categories
