Provision Registry

99 classified provisions across 105 platforms — browse, filter, and compare.

Every clause classified by type, severity, and platform. Updated as policies change.

Track provision changes: get alerts when clauses you care about change across platforms.
Filtering: AI automated
Microsoft · Responsible AI Report 2025
Internal governance structures determine whether AI commitments are enforced in practice or remain aspirational, directly affecting the reliability of protections consumers and enterprises depend on.
CA-P-000034 · First tracked Apr 3, 2026 · Last seen Apr 10, 2026
Microsoft · Microsoft Responsible AI Principles
Internal governance structures are increasingly required by law under the EU AI Act, but this page does not describe external audit rights, third-party verification, or …
CA-P-002088 · First tracked Apr 4, 2026 · Last seen Apr 9, 2026
Microsoft · Responsible AI
The existence of named governance bodies creates an accountability structure that regulators and the public can reference — and their effectiveness (or lack thereof) will …
CA-P-002074 · First tracked Apr 4, 2026 · Last seen Apr 9, 2026
Anthropic · Anthropic Usage Policy
Agentic AI guidelines represent a forward-looking regulatory posture that directly anticipates EU AI Act autonomous system requirements and reflects the novel risks of AI systems …
CA-P-002139 · First tracked Apr 4, 2026 · Last seen Apr 9, 2026
Google · Google AI Principles
This self-acknowledged ambiguity creates significant governance risk: Google concedes that the boundary between permitted military AI work and prohibited weapons-adjacent AI is unclear, leaving substantial …
CA-P-001912 · First tracked Apr 4, 2026 · Last seen Apr 9, 2026
Severity: medium · AI automated
Yelp · Yelp Privacy Policy
AI processing of personal data raises additional privacy risks including inferences about sensitive characteristics and potential for automated decision-making affecting users.
CA-P-001274 · First tracked Apr 3, 2026 · Last seen Apr 17, 2026
Apple · Apple App Store Review Guidelines
As AI-generated content becomes increasingly realistic, this requirement helps consumers distinguish between genuine and synthetic content, protecting against misinformation and manipulation.
CA-P-001968 · First tracked Apr 4, 2026 · Last seen Apr 10, 2026
Epic Games · Epic Games Privacy Policy
User inputs to AI features may contain sensitive personal information, and the policy does not specify how long input data is retained, whether it is …
CA-P-000632 · First tracked Apr 3, 2026 · Last seen Apr 17, 2026
TikTok · TikTok Community Guidelines
Algorithmic recommendation systems on TikTok have been linked to amplification of harmful content to vulnerable users including minors, and the lack of transparent disclosure of …
CA-P-001856 · First tracked Apr 3, 2026 · Last seen Apr 3, 2026
Robinhood · Robinhood Privacy Policy
Profile-based inferences about your financial behavior and psychology can influence what products you are offered and on what terms, with limited transparency or ability to …
CA-P-002209 · First tracked Apr 4, 2026 · Last seen Apr 10, 2026
Hinge · Hinge Privacy Policy
Automated systems shape who you see and who sees you on Hinge, and EEA/UK users have the right under GDPR to contest automated decisions that …
CA-P-001237 · First tracked Apr 3, 2026 · Last seen Apr 17, 2026
Tinder · Tinder Privacy Policy
Automated decisions can affect your ability to use the service and who you are matched with, without meaningful human review or explanation of the criteria …
CA-P-001219 · First tracked Apr 3, 2026 · Last seen Apr 17, 2026
Bumble · Bumble Privacy Policy
Automated profiling can significantly affect your experience on the platform — including who you can connect with — and raises rights under GDPR Article 22 …
CA-P-001195 · First tracked Apr 3, 2026 · Last seen Apr 17, 2026
Severity: medium · AI automated
Google · Google AI Principles
Algorithmic bias in Google's AI systems — including Search AI Mode, Gemini, and automated decision tools — can cause real-world harm to individuals in employment, …
CA-P-000143 · First tracked Apr 3, 2026 · Last seen Apr 17, 2026
Google · Google AI Principles
This is the closest this document comes to granting users a right of recourse against AI decisions, but it is framed as a design aspiration …
CA-P-001910 · First tracked Apr 4, 2026 · Last seen Apr 9, 2026
Severity: medium · AI automated
Google · Google AI Principles
Without specified safety testing standards, audit rights, or public disclosure of safety test results, this commitment cannot be independently verified by consumers, regulators, or enterprise …
CA-P-001909 · First tracked Apr 4, 2026 · Last seen Apr 9, 2026
Fiverr · Fiverr Privacy Policy
Profiling can affect what opportunities, services, or prices you're shown on the platform, and raises concerns about fairness and transparency in automated decision-making.
CA-P-000870 · First tracked Apr 3, 2026 · Last seen Apr 10, 2026
Spotify · Spotify Terms and Conditions
This disclosure reveals that Spotify's recommendation algorithm is not purely based on your listening habits — paid commercial relationships influence what content is promoted to …
CA-P-002171 · First tracked Apr 4, 2026 · Last seen Apr 9, 2026
Microsoft · Responsible AI Report 2025
The quality and provenance of training data directly determines whether AI systems are fair, accurate, and respectful of privacy — poor data governance is a …
CA-P-000036 · First tracked Apr 3, 2026 · Last seen Apr 10, 2026
Severity: medium · AI automated
Salesforce · Salesforce Terms of Service
As Salesforce integrates AI (including autonomous agents) into its products, this framework governs guardrails around AI behavior — which matters to any business relying on …
CA-P-001086 · First tracked Apr 3, 2026 · Last seen Apr 10, 2026
Microsoft · Responsible AI
Biased AI can affect hiring decisions, credit scoring, healthcare access, and other high-stakes outcomes; Microsoft's commitment to fairness is a signal that these risks are …
CA-P-000023 · First tracked Apr 3, 2026 · Last seen Apr 17, 2026
Microsoft · Microsoft Responsible AI Principles
AI bias can lead to discriminatory outcomes in areas like hiring, lending, healthcare, and criminal justice — Microsoft's public commitment to fairness is relevant to …
CA-P-000169 · First tracked Apr 3, 2026 · Last seen Apr 17, 2026
Klarna · Klarna Privacy Policy
Automated fraud risk assessments can affect your ability to use Klarna's services or make purchases, and data shared with fraud prevention networks may persist for …
CA-P-000926 · First tracked Apr 3, 2026 · Last seen Apr 10, 2026
Microsoft · Responsible AI
Human oversight is a critical safeguard against AI errors and harms, especially in high-stakes areas like healthcare, legal proceedings, and financial decisions.
CA-P-000024 · First tracked Apr 3, 2026 · Last seen Apr 17, 2026
Severity: medium · AI automated
Microsoft · Microsoft Responsible AI Principles
Human oversight is a critical safeguard ensuring that automated AI systems do not make consequential decisions — such as those affecting health, finances, or safety …
CA-P-000171 · First tracked Apr 3, 2026 · Last seen Apr 17, 2026
Spotify · Spotify Privacy Policy
These inferences form the basis for personalization and advertising targeting, meaning Spotify may act on assumptions about you that could be inaccurate, and you have …
CA-P-000328 · First tracked Apr 3, 2026 · Last seen Apr 17, 2026
Anthropic Claude · Claude.ai Terms of Service
The caveat 'if any' in Anthropic's assignment of Output rights reflects genuine legal uncertainty about whether AI-generated content is eligible for copyright protection, meaning you …
CA-P-000101 · First tracked Apr 3, 2026 · Last seen Apr 27, 2026
OpenAI · GPT-4o System Card (PDF)
This is a direct acknowledgment that GPT-4o's safety behaviors are not fully robust to manipulation, which has implications for any deployment context where the model …
CA-P-000069 · First tracked Apr 3, 2026 · Last seen Apr 10, 2026
OpenAI · GPT-4o System Card (PDF)
This means that when you interact with a GPT-4o-powered application, the safety settings you experience may be significantly different from ChatGPT's defaults — the business …
CA-P-000068 · First tracked Apr 3, 2026 · Last seen Apr 10, 2026
OpenAI · GPT-4o System Card (PDF)
The entire safety assurance framework governing GPT-4o's release is self-administered, meaning there is no independent verification that OpenAI's risk ratings or mitigations are accurate or …
CA-P-000066 · First tracked Apr 3, 2026 · Last seen Apr 17, 2026

Don't read every policy manually.

Get alerts when the clauses that matter to you change across 105 platforms.

Watcher — $9.99/mo · Professional — $149/mo