Provision Registry

3,422 classified provisions across 277 platforms — browse, filter, and compare.

Every clause classified by type, severity, and platform. Updated as policies change.

Start Professional free trial: track specific clauses across platforms with provision-level alerts.
Filtered by severity: High
PayPal · PayPal Privacy Statement
Using customer data to train AI models raises questions about the scope of consent provided and how long such data is retained for training purposes, which may have implications under GDPR's purpose limitation and data minimization principles.
CA-P-007899 First tracked May 10, 2026 Last seen May 11, 2026 Compare across platforms →
PayPal · PayPal Privacy Statement
This provision discloses that personal information, including financial and transaction data, is used to train AI models, and that automated decision-making is applied to fraud and risk assessments that may have consequences for account access and service availability.
CA-P-003928 First tracked Apr 28, 2026 Last seen May 12, 2026 Compare across platforms →
Windsurf · Windsurf Privacy Policy
This provision states that content users enter into Windsurf, which may include proprietary code, sensitive queries, or personal information, can be retained and used to train the company's AI systems beyond the immediate session.
CA-P-004016 First tracked Apr 30, 2026 Last seen May 12, 2026 Compare across platforms →
Windsurf · Windsurf Privacy Policy
Your actual coding questions, code snippets, and AI conversations may become training data, potentially including sensitive or proprietary code you did not intend to share beyond the immediate session.
CA-P-008823 First tracked May 10, 2026 Last seen May 11, 2026 Compare across platforms →
OpenAI · OpenAI Privacy Policy
The default is opt-out rather than opt-in, meaning your conversation content is used for model training unless you take active steps to stop it — something many users may not realize.
CA-P-009765 First tracked May 10, 2026 Last seen May 11, 2026 Compare across platforms →
Stability AI · Stability AI Privacy Policy
This provision means that creative inputs and outputs produced during your use of Stability AI tools may become part of the data used to improve the company's AI models, which raises questions about consent, data minimization, and the scope of use beyond the immediate service interaction.
CA-P-011443 First tracked May 12, 2026 Last seen May 12, 2026 Compare across platforms →
High · AI automated
Perplexity AI · Perplexity Enterprise Terms
This provision means that if an employee or end user relies on a Perplexity AI output that turns out to be incorrect, the enterprise customer has no warranty claim against Perplexity. The terms place the entire risk of AI output accuracy on the customer.
CA-P-010724 First tracked May 11, 2026 Last seen May 12, 2026 Compare across platforms →
High · Liability limitation
Google Gemini · Google Generative AI Prohibited Use Policy
This clause means if Gemini gives you wrong medical, legal, or financial advice and you act on it, Google bears no responsibility for any harm you suffer.
CA-P-003067 First tracked Apr 18, 2026 Last seen Apr 18, 2026 Compare across platforms →
OpenAI · Privacy Policy (ROW)
If ChatGPT generates false information about a real person — including you — OpenAI disclaims liability, which raises serious concerns about AI-generated defamation, inaccurate financial or medical advice, and the right to correct AI-generated falsehoods.
CA-P-002442 First tracked Apr 9, 2026 Last seen Apr 10, 2026 Compare across platforms →
Writer · Writer Terms of Service
If you rely on Writer's AI output in a business decision, legal filing, medical recommendation, or any other consequential context and it turns out to be wrong, Writer bears no liability — you do.
CA-P-006032 First tracked May 8, 2026 Last seen May 8, 2026 Compare across platforms →
High · Intellectual property
Tabnine · Tabnine Terms of Use
Code you submit to Tabnine — including proprietary business logic or sensitive algorithms — may be used to train or improve Tabnine's AI models, which could raise IP confidentiality and trade secret concerns for enterprise users.
CA-P-004139 First tracked Apr 30, 2026 Last seen Apr 30, 2026 Compare across platforms →
GitHub · GitHub Privacy Statement
The policy authorizes use of user data for AI product development, which may include training or improving machine learning models; the full scope of this use is not entirely defined within this document and requires review of separate product terms.
CA-P-011301 First tracked May 12, 2026 Last seen May 12, 2026 Compare across platforms →
High · Liability limitation
Airbnb · Airbnb Terms of Service
This clause significantly limits what you can recover from Airbnb if something goes wrong — such as a listing being materially different from its description, a safety incident at a property, or a host or guest causing damage — even if Airbnb's platform facilitated the harm.
CA-P-002762 First tracked Apr 18, 2026 Last seen Apr 18, 2026 Compare across platforms →
Pika · Pika Terms of Service
Holding users personally liable for autonomous AI-generated content they did not directly produce is an unusual and potentially unfair risk allocation. It could expose users to liability for AI hallucinations, defamatory statements, or harmful content generated by their AI Self without their knowledge.
CA-P-004434 First tracked Apr 30, 2026 Last seen Apr 30, 2026 Compare across platforms →
Pika · Pika Terms of Service
If your AI Self generates harmful, defamatory, or misleading content in autonomous interactions with other users or on social media, these terms assert that you bear sole responsibility for those outputs, which could create unexpected legal exposure for the account holder.
CA-P-007563 First tracked May 9, 2026 Last seen May 12, 2026 Compare across platforms →
Dun & Bradstreet · D&B Privacy Policy
AI systems used in commercial credit scoring, risk decisioning, and business intelligence can have material impacts on businesses and individuals, and a voluntary self-certification may not provide sufficient independent oversight or recourse.
CA-P-005081 First tracked May 7, 2026 Last seen May 7, 2026 Compare across platforms →
High · AI automated
Microsoft Azure · Microsoft Privacy
This means conversations you have with Xbox or other Microsoft AI features are not private — they may be stored, reviewed by humans, and used to build Microsoft's AI products.
CA-P-003189 First tracked Apr 27, 2026 Last seen Apr 27, 2026 Compare across platforms →
High · Data usage
OpenAI · Terms of Use (ROW)
Your messages, creative work, and uploaded files may be used to train OpenAI's AI systems, potentially influencing future AI outputs, and your data may persist in model weights even after you delete your account.
CA-P-000056 First tracked Apr 3, 2026 Last seen Apr 3, 2026 Compare across platforms →
High · Data usage
TikTok · TikTok Terms of Service
Your creative content becomes training data for TikTok's AI products without any compensation, and this right is irrevocable — meaning even if you delete your content or account, TikTok may have already used it to train models that continue to benefit the company.
CA-P-002453 First tracked Apr 9, 2026 Last seen Apr 27, 2026 Compare across platforms →
High · AI automated
Descript · Descript Terms of Service
This license covers your voice recordings and video, which Descript may use to train AI models including voice cloning features, a use most users would not anticipate from an editing app.
CA-P-005340 First tracked May 7, 2026 Last seen May 7, 2026 Compare across platforms →
High · Data usage
LinkedIn · LinkedIn User Agreement
Your personal content and professional information could be used to build AI systems, potentially without your active awareness, and this use extends to third-party AI tools LinkedIn works with.
CA-P-000656 First tracked Apr 3, 2026 Last seen Apr 3, 2026 Compare across platforms →
High · AI automated
Instacart · Instacart Terms of Service
This provision permanently transfers broad intellectual property rights over your uploaded content to Instacart for AI development purposes, with no compensation and no expiration, even if you later delete your account.
CA-P-003407 First tracked Apr 27, 2026 Last seen Apr 27, 2026 Compare across platforms →
Leonardo AI · Leonardo AI Terms of Service
Your creative prompts and generated content may be used commercially by Leonardo.Ai to improve its product, potentially without meaningful compensation or notification each time this occurs.
CA-P-004006 First tracked Apr 30, 2026 Last seen Apr 30, 2026 Compare across platforms →
Runway · Runway Terms of Service
This provision means your creative prompts and AI-generated videos become permanent training data for Runway's AI, with no ability to withdraw consent even if you delete your account.
CA-P-004083 First tracked Apr 30, 2026 Last seen Apr 30, 2026 Compare across platforms →
Ideogram · Ideogram Terms of Service
Your creative inputs, prompts, and generated images can be permanently incorporated into Ideogram's AI training datasets, meaning your content shapes future commercial AI products without compensation or ongoing control.
CA-P-004065 First tracked Apr 30, 2026 Last seen Apr 30, 2026 Compare across platforms →
Pika · Pika Terms of Service
Your voice recordings, facial likeness, and personal characteristics submitted to Pika can be used to train AI models — this type of biometric data collection carries significant privacy risks and is regulated by specific state laws with private rights of action.
CA-P-004433 First tracked Apr 30, 2026 Last seen Apr 30, 2026 Compare across platforms →
Pika · Pika Terms of Service
Your voice recordings and likeness data are biometric-adjacent personal information; their use for AI training and autonomous AI Self creation raises significant privacy concerns and may trigger specific legal protections depending on your state of residence.
CA-P-007562 First tracked May 9, 2026 Last seen May 12, 2026 Compare across platforms →
High · AI automated
Slack · Slack Terms of Service
Confidential business communications, personal employee data, and sensitive files shared in Slack workspaces could be used by Slack to train AI models unless the organization takes affirmative steps to opt out.
CA-P-003511 First tracked Apr 27, 2026 Last seen Apr 27, 2026 Compare across platforms →
High · Data usage
Microsoft · Microsoft Privacy Statement (Legacy)
As AI becomes central to Microsoft's products, the use of your personal conversations, documents, and inputs to train AI models represents a significant and growing use of personal data that many users may not anticipate.
CA-P-000001 First tracked Apr 3, 2026 Last seen Apr 17, 2026 Compare across platforms →
High · Data usage
Microsoft Azure · Microsoft Privacy
Your personal data — including things you type, say, or create — may be used to improve Microsoft's AI systems, often without a clear opt-out mechanism surfaced at the point of collection.
CA-P-000157 First tracked Apr 3, 2026 Last seen Apr 17, 2026 Compare across platforms →

Professional Governance Intelligence

Monitor specific governance provisions across platforms.

Professional includes provision-level monitoring, regulatory mapping, and audit-ready analysis.

Start free · Start Professional free trial