Thomson Reuters may use your personal information to train its AI and machine learning systems, including those built into its products, and states it will seek consent where the law requires it.
This analysis describes what Thomson Reuters's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.
This provision means personal data you provide, or that Thomson Reuters collects about you, could be used to build AI systems, raising questions about what data is used, for how long, and whether individuals have effective control over that use.
Interpretive note: The provision conditions consent on applicable law rather than applying it universally, creating jurisdictional variance in when and how this use is permissible.
Your personal information, potentially including professional history, correspondence content, or research queries, may be used to develop or improve AI models embedded in Thomson Reuters products, with consent sought only where applicable law mandates it rather than as a default practice.
How other platforms handle this
We use your personal data to develop, train, and improve our artificial intelligence and machine learning models. This includes using your transaction data, behavioral data, and interaction data to enhance our fraud detection, credit assessment, and personalization capabilities. We take steps to pro...
We use Personal Data to detect and prevent fraud, and to develop and improve our fraud detection models and other machine learning systems. This may include using transaction data, device information, and other Personal Data to train and refine our systems.
engage in any of the foregoing in connection with any use, creation, development, modification, prompting, fine-tuning, training, testing, benchmarking or validation of any artificial intelligence or machine learning tool, model, system, algorithm, product or other technology ("AI Tool").
Monitoring
Thomson Reuters has changed this document before.
Receive same-day alerts, structured change summaries, and monitoring for up to 10 platforms.
"We may use the personal information we collect to develop, train, and improve artificial intelligence and machine learning models, including those used in our products and services. Where required by applicable law, we will obtain your consent before using your personal information for AI training purposes."
— Excerpt from Thomson Reuters Privacy
Regulatory landscape
This provision engages GDPR Articles 5 (purpose limitation, data minimisation), 6 (lawful basis), 9 (sensitive data), and 22 (automated decision-making), as well as CCPA and CPRA provisions on secondary use of personal information. The EU AI Act, once fully applicable, may impose additional transparency and data governance obligations on AI training practices. The UK ICO has issued guidance on AI and data protection that compliance teams should consult. The FTC has also signalled scrutiny of secondary uses of personal data for AI training under its Section 5 unfair-or-deceptive-practices authority.

Governance exposure
High. Using personal data for AI training as a secondary purpose may not be consistent with the original purpose for which the data was collected, creating tension with GDPR's purpose limitation principle. Where the legitimate interests basis is relied upon, a documented Legitimate Interests Assessment is advisable. For sensitive personal data categories, additional safeguards under GDPR Article 9 apply, and consent may be required regardless of other bases.

Jurisdiction flags
EU and EEA users face the highest exposure given GDPR's strictness on secondary processing and sensitive data. California users may have CPRA rights to limit the use of sensitive personal information. Illinois users should note BIPA implications if biometric data is involved in any AI training pipeline. The UK ICO's AI guidance creates additional obligations for UK-facing processing.

Contract and vendor implications
Enterprise customers whose employees' or clients' data is processed by Thomson Reuters as a data processor should review their Data Processing Agreements to confirm that AI training is either excluded or subject to explicit authorisation. Standard DPA templates may not address this use case, and procurement teams should flag it as a contract review trigger. Liability for unauthorised secondary use could fall on the processor, the controller, or both, depending on DPA terms.

Compliance considerations
Compliance teams should audit which Thomson Reuters products are in use and what personal data flows into them, assess whether current consent mechanisms or legitimate interests assessments cover AI training, and update data mapping documentation to reflect this secondary processing purpose. Organisations subject to GDPR should consider whether a Data Protection Impact Assessment is required for high-risk AI processing involving their users' data.
Full compliance analysis
Regulatory citations, enforcement risk, and due diligence action items.
Free: track 1 platform + weekly digest. Watcher: 10 platforms + same-day alerts. No credit card required.
How 10 AI platforms describe the use of user data for model training, improvement, and development, based on archived governance provisions.
Professional Governance Intelligence
Need to monitor specific governance provisions?
Professional includes provision-level monitoring, governance timelines, regulatory mapping, and audit-ready analysis.
Built from archived source documents, structured governance mappings, and historical version tracking.
ConductAtlas has identified this type of provision across 1 platform. See the full comparison.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Thomson Reuters.