DeepL · DeepL Terms and Conditions

Anonymised Data Use for AI Training

Medium severity

What it is

DeepL analyses texts submitted for translation to improve its services, but states that it anonymises the data before analysis. DeepL Pro subscribers' content is explicitly excluded from training the translation AI.

Consumer impact (what this means for users)

Free users' translation content is subject to anonymised analysis for AI improvement, while DeepL Pro subscribers' texts are explicitly excluded from model training. The privacy protection applied to translation data therefore depends on which subscription tier you use.

What you can do

⚠️ These actions may provide transparency or partial mitigation but may not fully address the underlying issue. Effectiveness varies by jurisdiction and individual circumstances.
  • Export Your Data
    Contact DeepL's privacy team at privacy@deepl.com to submit a GDPR data subject access request or to request information about what translation data has been retained and processed.

Cross-platform context

See how other platforms handle Anonymised Data Use for AI Training and similar clauses.


Why it matters (compliance & risk perspective)

Free-tier users should be aware that their translation content may be analysed to improve DeepL's AI systems, while Pro subscribers have an explicit carve-out. This is a meaningful distinction that affects which plan users should choose when data use is a concern.

Original clause language
To improve the quality of our services, we analyse texts submitted for translation. We ensure that this analysis cannot be traced back to individual users by anonymising the data before analysis. DeepL Pro subscribers' texts are not used to train our machine translation systems.

Institutional analysis (Compliance & legal intelligence)

REGULATORY FRAMEWORK: This provision engages GDPR Art. 5(1)(b) (purpose limitation), Art. 5(1)(c) (data minimisation), and Art. 89 (safeguards for processing for scientific or statistical purposes). The key question is whether anonymisation is effective under GDPR Recital 26 — if data can be re-identified, GDPR obligations apply in full. The EU AI Act (Regulation 2024/1689) may impose additional obligations regarding training data governance. Enforcement authority is the German BfDI and national DPAs for EU users; the ICO for UK users.


Applicable agencies

  • FTC
    The FTC has jurisdiction over deceptive data practices under Section 5 of the FTC Act, including misrepresentations about how user data is used for AI training.

Provision details

Document information
Document
DeepL Terms and Conditions
Entity
DeepL
Document last updated
April 29, 2026
Tracking information
First tracked
April 30, 2026
Last verified
April 30, 2026
Record ID
CA-P-004043
Document ID
CA-D-00449
Evidence Provenance
Source URL
Wayback Machine
SHA-256
ba265be54e14f5920233dd37a414fbbfa00bc2d8d7db4b496cd94ec160bbf93f
Verified
✓ Snapshot stored   ✓ Change verified
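The published SHA-256 digest lets anyone independently check that an archived snapshot has not been altered. A minimal sketch of that check in Python (the snapshot bytes below are placeholders, not the real archived page; in practice you would hash the downloaded snapshot file and compare it against the digest shown above):

```python
# Verify a stored snapshot against a published SHA-256 digest.
# The snapshot content here is illustrative only.
import hashlib


def sha256_of_bytes(data: bytes) -> str:
    """Return the lowercase hex SHA-256 digest of raw snapshot bytes."""
    return hashlib.sha256(data).hexdigest()


def verify_snapshot(data: bytes, expected_digest: str) -> bool:
    """True if the snapshot's digest matches the published value."""
    return sha256_of_bytes(data) == expected_digest.lower()


# Placeholder snapshot (not the real archived document):
snapshot = b"<html>archived terms snapshot</html>"
digest = sha256_of_bytes(snapshot)
print(verify_snapshot(snapshot, digest))  # matching content verifies
```

Any single-byte change to the snapshot produces a completely different digest, so a match is strong evidence the stored copy is the one that was originally captured.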
How to Cite
ConductAtlas Policy Archive
Entity: DeepL | Document: DeepL Terms and Conditions | Record: CA-P-004043
Captured: 2026-04-30 05:34:54 UTC | SHA-256: ba265be54e14f592…
URL: https://conductatlas.com/platform/deepl/deepl-terms-and-conditions/anonymised-data-use-for-ai-training/
Accessed: May 2, 2026
Classification
Severity
Medium
Categories
