Provision Registry

3,422 classified provisions across 277 platforms — browse, filter, and compare.

Every clause classified by type, severity, and platform. Updated as policies change.
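Each registry entry below follows the same record shape: an ID, a platform and source document, a severity and category, first-tracked and last-seen dates, and a plain-language summary. As a rough illustration only (the field names and types here are hypothetical, not the registry's actual schema), an entry and the page's severity filter might look like:

```typescript
// Hypothetical shape of one registry entry, inferred from the listing
// below; field names are illustrative, not the product's actual schema.
interface Provision {
  id: string;           // e.g. "CA-P-008961"
  platform: string;     // e.g. "Microsoft"
  document: string;     // e.g. "Microsoft Privacy Statement (Legacy)"
  severity: "high" | "medium" | "low";
  category: string;     // e.g. "AI"
  firstTracked: string; // ISO date
  lastSeen: string;     // ISO date
  summary: string;
}

// Mirrors the page's severity filter (e.g. "High").
function bySeverity(items: Provision[], severity: Provision["severity"]): Provision[] {
  return items.filter((p) => p.severity === severity);
}

// Sample record drawn from the first entry in the listing.
const sample: Provision = {
  id: "CA-P-008961",
  platform: "Microsoft",
  document: "Microsoft Privacy Statement (Legacy)",
  severity: "high",
  category: "AI",
  firstTracked: "2026-05-10",
  lastSeen: "2026-05-11",
  summary: "Copilot prompts may be retained and used for model improvement.",
};

console.log(bySeverity([sample], "high").length); // 1
```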

Start a Professional free trial to track specific clauses across platforms with provision-level alerts.
Filtering: High severity
High · AI · Automated
Microsoft · Microsoft Privacy Statement (Legacy)
Prompts you send to Copilot or other AI features may contain sensitive personal or professional information, and the statement indicates this interaction data can be retained and used for model improvement, which is a materially different use than simply answering your question.
CA-P-008961 · First tracked May 10, 2026 · Last seen May 11, 2026
High · AI · Automated
Microsoft Azure · Microsoft Privacy
Users interacting with AI features may not realize that their prompts and AI-generated responses can be collected and used for product improvement, which could include sensitive or confidential content depending on how the feature is used.
CA-P-007942 · First tracked May 10, 2026 · Last seen May 12, 2026
Microsoft · Microsoft Privacy Statement (Legacy)
Your AI conversations with Microsoft tools may contain sensitive personal, professional, or confidential information, and this data could be reviewed by Microsoft employees or used to train AI models.
CA-P-003850 · First tracked Apr 28, 2026 · Last seen Apr 28, 2026
High · AI · Automated
Microsoft · Microsoft Privacy Statement (Legacy)
This provision discloses that content you type, speak, or share with Copilot and other AI features may be retained and used for product improvement purposes, which may include use beyond the immediate interaction.
CA-P-010757 · First tracked May 11, 2026 · Last seen May 11, 2026
LinkedIn · LinkedIn Privacy Policy
Your professional content and behavior on LinkedIn may be used to build AI systems without your active knowledge, and you may be opted in by default rather than asked to opt in.
CA-P-003974 · First tracked Apr 28, 2026 · Last seen Apr 28, 2026
LinkedIn · LinkedIn Privacy Policy
AI training data use is a significant and relatively recent category of data processing that extends beyond traditional service delivery purposes; the opt-out rather than opt-in structure means your data is used by default.
CA-P-007850 · First tracked May 9, 2026 · Last seen May 11, 2026
LinkedIn · LinkedIn Privacy Policy
This provision authorizes LinkedIn to use the professional content and data you contribute to the platform to develop and improve AI products, including sharing with its parent company Microsoft, which may extend the use of your data beyond the LinkedIn platform itself.
CA-P-002148 · First tracked Apr 4, 2026 · Last seen May 12, 2026
LinkedIn · LinkedIn Privacy Policy
This provision means your professional history, opinions, posts, and behavioral data may be permanently incorporated into AI models, with real implications for how your data is used beyond your direct interactions with LinkedIn.
CA-P-002588 · First tracked Apr 9, 2026 · Last seen Apr 10, 2026
LinkedIn · LinkedIn User Agreement
This license is broad and perpetual: once your professional content, name, image, and likeness have been shared, LinkedIn can use them to train AI models even after you delete them from the platform.
CA-P-002594 · First tracked Apr 9, 2026 · Last seen Apr 10, 2026
Pinterest · Pinterest Privacy Policy
Your personal browsing habits, pins, and interactions could be used to train AI systems, which raises concerns about consent and the long-term use of your data beyond the original purpose.
CA-P-000686 · First tracked Apr 3, 2026 · Last seen Apr 17, 2026
Slack · Slack Privacy Policy
Your workplace messages could potentially be used to train Slack's AI models, and whether this happens depends on your employer's contract settings — individual users typically have no direct control over this.
CA-P-001017 · First tracked Apr 3, 2026 · Last seen May 7, 2026
Yelp · Yelp Privacy Policy
This means your personal data and content contributions could be used to train AI models, potentially without a clear way to opt out, and may be shared with third-party AI companies whose own privacy practices apply.
CA-P-005879 · First tracked May 8, 2026 · Last seen May 8, 2026
Dropbox · Dropbox Privacy Policy
Your files and behavior on Dropbox could be used to develop AI systems, raising questions about consent, data minimization, and the scope of use beyond storage.
CA-P-001035 · First tracked Apr 3, 2026 · Last seen Apr 17, 2026
Atlassian · Atlassian Cloud Terms
This authorization permits Atlassian to use customer-submitted content and usage data for AI model training and product improvement purposes, which may be material for organizations with confidential data, regulated data, or sector-specific data handling obligations.
CA-P-010939 · First tracked May 12, 2026 · Last seen May 12, 2026
ZipRecruiter · ZipRecruiter Privacy Policy
Automated employment matching systems can encode bias and affect your job opportunities without human review, and the policy does not describe an explicit opt-out mechanism for AI-driven profiling.
CA-P-005492 · First tracked May 7, 2026 · Last seen May 7, 2026
Zoom · Zoom Privacy Statement
This provision determines whether the content of your meetings, including things you say, type, or share, may be used to improve Zoom's AI products. Because the opt-out is assigned to account administrators rather than individual users or participants, individuals who join meetings on accounts they do not control cannot directly manage this setting.
CA-P-011088 · First tracked May 12, 2026 · Last seen May 12, 2026
Cash App · Cash App Privacy Policy
Using customer financial and behavioral data to train AI systems without a clear opt-out is an emerging area of regulatory concern and may constitute secondary processing beyond what users reasonably expect when signing up for a payments app.
CA-P-000609 · First tracked Apr 3, 2026 · Last seen Apr 17, 2026
Amazon Marketplace · Amazon Privacy Notice
Using consumer data to train AI models is an emerging area of regulatory scrutiny globally, and consumers generally do not expect their personal interactions to permanently inform commercial AI systems, particularly without explicit consent.
CA-P-002107 · First tracked Apr 4, 2026 · Last seen Apr 9, 2026
Pinterest · Pinterest Privacy Policy
Your personal content and interactions on Pinterest may be used to build AI systems without a standalone, specific opt-out mechanism, raising concerns about intellectual property, consent, and the scope of data use beyond the original purpose.
CA-P-003363 · First tracked Apr 27, 2026 · Last seen Apr 27, 2026
Cash App · Cash App Privacy Policy
Your financial transactions, behavioral patterns, and personal information may be permanently incorporated into AI models that affect how Cash App treats you and other users, with no opt-out mechanism disclosed for this specific use.
CA-P-004577 · First tracked May 7, 2026 · Last seen May 7, 2026
X · X Privacy Policy
Using personal data for AI model training raises questions about consent, data minimization, and the purposes for which data was originally collected, particularly under GDPR and emerging AI governance frameworks.
CA-P-009972 · First tracked May 11, 2026 · Last seen May 11, 2026
Roblox · Roblox Privacy and Cookie Policy
Your messages, audio, and other content on Roblox may be used to train artificial intelligence systems, which is a significant secondary use of your personal data you may not have anticipated.
CA-P-000599 · First tracked Apr 3, 2026 · Last seen Apr 17, 2026
X · X Privacy Policy
This provision authorizes X to use broad categories of personal data, including content you create and how you interact with the platform, to develop and improve AI systems, which is a use that may extend beyond what users typically anticipate from a social media service.
CA-P-006637 · First tracked May 8, 2026 · Last seen May 12, 2026
Thomson Reuters · Thomson Reuters Privacy
This provision means personal data you provide, or that Thomson Reuters collects about you, could be used to build AI systems, raising questions about what data is used, for how long, and whether individuals have effective control over that use.
CA-P-009348 · First tracked May 10, 2026 · Last seen May 12, 2026
Grammarly · Grammarly Privacy Policy
The text you submit to Grammarly may include sensitive personal, professional, or confidential information, and using it for AI training goes significantly beyond the core service of providing writing suggestions.
CA-P-004129 · First tracked Apr 30, 2026 · Last seen Apr 30, 2026
Cash App · Cash App Privacy Policy
The authorization to use personal data for AI training is explicit and broad, and the notice does not describe limits on which data categories may be used for this purpose or how long AI-trained models derived from user data are retained.
CA-P-011243 · First tracked May 12, 2026 · Last seen May 12, 2026
LinkedIn · LinkedIn Privacy Policy
Your professional data and behavior on LinkedIn may be used to build AI systems that affect how content, jobs, and people are ranked and recommended across the platform.
CA-P-000646 · First tracked Apr 3, 2026 · Last seen Apr 17, 2026
Databricks · Databricks Privacy Notice
Using personal data for AI model training is a non-obvious secondary purpose that goes beyond service delivery — it can affect how AI systems behave and who benefits from your data contributions.
CA-P-006111 · First tracked May 8, 2026 · Last seen May 8, 2026
ClickUp · ClickUp Privacy Policy
Business users managing confidential client projects inside ClickUp may unknowingly contribute that content to AI training datasets, potentially conflicting with their own confidentiality obligations.
CA-P-005176 · First tracked May 7, 2026 · Last seen May 7, 2026
Synthesia · Synthesia Terms of Service
This restriction places the legal and ethical burden of obtaining consent directly on the customer, and failure to comply constitutes a breach of agreement that can trigger immediate suspension and indemnification obligations.
CA-P-008193 · First tracked May 10, 2026 · Last seen May 12, 2026

Professional Governance Intelligence

Monitor specific governance provisions across platforms.

Professional includes provision-level monitoring, regulatory mapping, and audit-ready analysis.

Start free · Start Professional free trial