
AI and Machine Learning Training Use of Personal Data

High severity · Medium confidence · Explicit document language · Rare · 1 of 325 platforms
Document Record

What it is

Thomson Reuters may use your personal information to train its AI and machine learning systems, including those built into its products, and states it will seek consent where the law requires it.

This analysis describes what Thomson Reuters's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This provision means personal data you provide, or that Thomson Reuters collects about you, could be used to build AI systems, raising questions about what data is used, for how long, and whether individuals have effective control over that use.

Interpretive note: The provision conditions consent on applicable law rather than applying it universally, creating jurisdictional variance in when and how this use is permissible.

Consumer impact (what this means for users)

Your personal information, potentially including professional history, correspondence content, or research queries, may be used to develop or improve AI models embedded in Thomson Reuters products, with consent sought only where applicable law mandates it rather than as a default practice.

What you can do

⚠️ These actions may provide transparency or partial mitigation but may not fully address the underlying issue. Effectiveness varies by jurisdiction and individual circumstances.
  • Delete Your Data
    Visit the Thomson Reuters privacy rights portal and submit a request to object to or restrict the use of your personal data for AI training purposes. Select the appropriate request type and provide details about your relationship with Thomson Reuters.

How other platforms handle this

Klarna Medium

We use your personal data to develop, train, and improve our artificial intelligence and machine learning models. This includes using your transaction data, behavioral data, and interaction data to enhance our fraud detection, credit assessment, and personalization capabilities. We take steps to pro...

Stripe Medium

We use Personal Data to detect and prevent fraud, and to develop and improve our fraud detection models and other machine learning systems. This may include using transaction data, device information, and other Personal Data to train and refine our systems.

Hulu Medium

engage in any of the foregoing in connection with any use, creation, development, modification, prompting, fine-tuning, training, testing, benchmarking or validation of any artificial intelligence or machine learning tool, model, system, algorithm, product or other technology ("AI Tool").


Monitoring

Thomson Reuters has changed this document before.

Original Clause Language

"We may use the personal information we collect to develop, train, and improve artificial intelligence and machine learning models, including those used in our products and services. Where required by applicable law, we will obtain your consent before using your personal information for AI training purposes."

— Excerpt from the Thomson Reuters Privacy policy


Institutional analysis (Compliance & governance intelligence)

REGULATORY LANDSCAPE: This provision engages GDPR Articles 5 (purpose limitation, data minimisation), 6 (lawful basis), 9 (sensitive data), and 22 (automated decision-making), as well as CCPA and CPRA provisions on secondary use of personal information. The EU AI Act, once fully applicable, may impose additional transparency and data governance obligations on AI training practices. The UK ICO has issued guidance on AI and data protection that compliance teams should consult. The FTC has also signalled scrutiny of secondary uses of personal data for AI training under its Section 5 unfair or deceptive practices authority.

GOVERNANCE EXPOSURE: High. The use of personal data for AI training as a secondary purpose may not be consistent with the original purpose for which the data was collected, creating tension with GDPR's purpose limitation principle. Where the legitimate interests basis is relied upon, a documented Legitimate Interests Assessment is advisable. For sensitive personal data categories, additional safeguards under GDPR Article 9 apply, and consent may be required regardless of other bases.

JURISDICTION FLAGS: EU and EEA users face the highest exposure given GDPR's strictness on secondary processing and sensitive data. California users may have CPRA rights to limit the use of sensitive personal information. Illinois users should note BIPA implications if biometric data is involved in any AI training pipeline. The UK ICO's AI guidance creates additional obligations for UK-facing processing.

CONTRACT AND VENDOR IMPLICATIONS: Enterprise customers whose employees' or clients' data is processed by Thomson Reuters as a data processor should review their Data Processing Agreements to confirm AI training is either excluded or subject to explicit authorisation. Standard DPA templates may not address this use case, and procurement teams should flag this as a contract review trigger. Liability for unauthorised secondary use could fall on both the processor and the controller, depending on DPA terms.

COMPLIANCE CONSIDERATIONS: Compliance teams should audit which Thomson Reuters products are used and what personal data flows into those products, assess whether current consent mechanisms or legitimate interests assessments cover AI training use, and update data mapping documentation to reflect this secondary processing purpose. Organisations subject to GDPR should consider whether a Data Protection Impact Assessment is required for high-risk AI processing involving their users' data.


Applicable agencies

  • FTC
    The FTC has signalled enforcement interest in secondary uses of personal data for AI training under its unfair or deceptive practices authority, and this provision involves a secondary use of consumer data.
    File a complaint →

Applicable regulations

GDPR
European Union

Provision details

Document information
Document
Thomson Reuters Privacy
Entity
Thomson Reuters
Document last updated
May 5, 2026
Tracking information
First tracked
May 8, 2026
Last verified
May 10, 2026
Record ID
CA-P-009348
Document ID
CA-D-00720
Evidence Provenance
Source URL
Wayback Machine
Content hash (SHA-256)
9e8a0f4bd1c9b41ed71cee58bb2f7847b755fb8dfc8390d88565630bf1f4db04
Analysis generated
May 8, 2026 05:17 UTC
Methodology
Evidence
✓ Snapshot stored   ✓ Hash verified
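The provenance fields above can be checked mechanically. The sketch below is a minimal illustration of hash verification, assuming the published digest covers the raw bytes of the stored snapshot file; the exact canonicalization ConductAtlas applies before hashing is not documented here.

```python
# Sketch: verify an archived snapshot against a published SHA-256 content hash.
# Assumption: the digest is computed over the snapshot's raw bytes.
import hashlib


def sha256_hex(data: bytes) -> str:
    """Return the lowercase hex SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()


def matches_published_hash(data: bytes, published: str) -> bool:
    """Compare a computed digest to a published one, ignoring case and whitespace."""
    return sha256_hex(data) == published.strip().lower()


# Stand-in snapshot bytes for illustration; a real check would read the
# archived file and compare against the record's published digest
# (9e8a0f4bd1c9b41ed71cee58bb2f7847b755fb8dfc8390d88565630bf1f4db04).
snapshot = b"example snapshot bytes"
print(matches_published_hash(snapshot, sha256_hex(snapshot)))  # → True
```

Any single-byte change to the snapshot produces an entirely different digest, which is what makes the published hash a stable integrity anchor for legal filings and research citation.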
Citation Record
Entity: Thomson Reuters
Document: Thomson Reuters Privacy
Record ID: CA-P-009348
Captured: 2026-05-08 05:17:57 UTC
SHA-256: 9e8a0f4bd1c9b41e…
URL: https://conductatlas.com/platform/thomson-reuters/thomson-reuters-privacy/ai-and-machine-learning-training-use-of-personal-data/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity
High
Categories



Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Thomson Reuters's AI and Machine Learning Training Use of Personal Data clause do?

This provision means personal data you provide, or that Thomson Reuters collects about you, could be used to build AI systems, raising questions about what data is used, for how long, and whether individuals have effective control over that use.

How does this clause affect you?

Your personal information, potentially including professional history, correspondence content, or research queries, may be used to develop or improve AI models embedded in Thomson Reuters products, with consent sought only where applicable law mandates it rather than as a default practice.

How many platforms have this type of clause?

ConductAtlas has identified this type of provision on 1 platform. See the full comparison.

Is ConductAtlas affiliated with Thomson Reuters?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Thomson Reuters.