Provision Registry

3422 classified provisions across 277 platforms — browse, filter, and compare.

Every clause classified by type, severity, and platform. Updated as policies change.

Track specific clauses across platforms with provision-level alerts.
Mistral AI · Mistral AI Terms of Service
Your conversations may contribute to improving Mistral AI's models by default on free and some paid plans, meaning the things you type into the service could be reviewed and incorporated into future AI training unless you take action to opt out.
CA-P-010130 · First tracked May 11, 2026 · Last seen May 12, 2026
Anthropic · Anthropic Privacy Policy
This provision directly determines whether everything you share with Claude, including personal details, work content, and private thoughts, becomes training material for Anthropic's AI systems.
CA-P-007136 · First tracked May 9, 2026 · Last seen May 9, 2026
Writer · Writer Privacy Policy
This is the single most important protection for enterprise users who submit proprietary, confidential, or sensitive business data to Writer's AI tools — it means your content is not being used to make the AI smarter for others.
CA-P-005911 · First tracked May 8, 2026 · Last seen May 8, 2026
OpenAI · Privacy Policy (ROW)
Anything you share with ChatGPT — including sensitive personal, medical, or financial details — could become part of OpenAI's AI training dataset unless you proactively opt out.
CA-P-000044 · First tracked Apr 3, 2026 · Last seen Apr 17, 2026
OpenAI · OpenAI Privacy Policy
This means sensitive information you share with ChatGPT — including health details, financial concerns, or personal problems — could be used to train AI systems and potentially seen by OpenAI employees who review training data.
CA-P-003156 · First tracked Apr 27, 2026 · Last seen Apr 27, 2026
Mistral AI · Mistral AI Privacy Policy
Free users may not realize their private conversations are being used to build commercial AI products unless they proactively opt out.
CA-P-004351 · First tracked Apr 30, 2026 · Last seen Apr 30, 2026
PayPal · PayPal Privacy Statement
Using customer financial data to train AI models without a clear opt-out is a novel and contested practice; automated decisions affecting account access or creditworthiness can harm consumers without transparent human review.
CA-P-002262 · First tracked Apr 5, 2026 · Last seen Apr 10, 2026
OpenAI · Privacy Policy (ROW)
Sensitive personal information you share in conversations — including health questions, financial details, or private communications — could be used to shape future AI behavior.
CA-P-001981 · First tracked Apr 4, 2026 · Last seen Apr 9, 2026
Suno · Suno Privacy Policy
This means content you create or upload, including music prompts and generated songs, may feed back into Suno's AI training pipeline without requiring your explicit, specific consent, which is a materially different standard than opt-in consent.
CA-P-004398 · First tracked Apr 30, 2026 · Last seen May 12, 2026
OpenAI · Privacy Policy (ROW)
Conversations with ChatGPT can include sensitive personal information — health questions, financial details, relationship issues — and using this content for model training without opt-in consent raises significant privacy risks.
CA-P-003131 · First tracked Apr 27, 2026 · Last seen Apr 27, 2026
Perplexity AI · Perplexity Privacy Policy
Search queries often contain sensitive personal information about health, finances, relationships, or legal issues — using these as AI training data without explicit opt-in consent creates real privacy risks.
CA-P-006923 · First tracked May 8, 2026 · Last seen May 8, 2026
Google Gemini · Gemini Apps Privacy Notice
An opt-out model for AI training data use means most users' conversations contribute to AI model development without their active knowledge or consent, which raises significant concerns under GDPR's requirements for a valid lawful basis for processing.
CA-P-001916 · First tracked Apr 4, 2026 · Last seen Apr 9, 2026
Character.AI · Character.ai Privacy Policy
Users engaging in potentially personal or sensitive conversations with AI characters may not fully appreciate that their messages and voice inputs can become training material for commercial AI models.
CA-P-010330 · First tracked May 11, 2026 · Last seen May 12, 2026
Tabnine · Tabnine Privacy Policy
Code you type into your IDE may contain proprietary algorithms, API keys, or sensitive business logic that could be incorporated into AI training datasets, creating intellectual property and confidentiality risks.
CA-P-004221 · First tracked Apr 30, 2026 · Last seen Apr 30, 2026
OpenAI · OpenAI Privacy Policy
This provision is operationally significant because it means that conversational inputs, which may include personal, professional, or sensitive information, may be incorporated into AI model training unless the user actively disables the setting.
CA-P-011503 · First tracked May 12, 2026 · Last seen May 12, 2026
Google Gemini · Gemini Apps Privacy Notice
Even if you opt out of saving your conversation history, Google still uses your chat data to train its AI — meaning there is no complete opt-out from AI training data use available through standard account settings.
CA-P-002371 · First tracked Apr 9, 2026 · Last seen Apr 10, 2026
Zoom · Zoom Privacy Statement
The distinction between account-owner consent and individual participant consent means that employees and meeting guests may have their meeting content used to inform AI model training based on a decision made by their employer or host, without individual opt-in.
CA-P-009831 · First tracked May 10, 2026 · Last seen May 11, 2026
Anthropic · Claude.ai Terms of Service
This provision means that even users who opt out of training cannot fully prevent their conversation data from being used in AI model development under certain circumstances, which has implications for personal data shared in conversations.
CA-P-009315 · First tracked May 10, 2026 · Last seen May 12, 2026
Perplexity AI · Perplexity AI Privacy Policy
Users frequently ask sensitive personal questions on AI search platforms without realizing those queries could be stored and used to train commercial AI systems, creating privacy risks especially for health, legal, or financial queries.
CA-P-005010 · First tracked May 7, 2026 · Last seen May 7, 2026
Synthesia · Synthesia Privacy Policy
This means your likeness and voice could be used commercially to develop AI products beyond your own videos, and the opt-out is not automatic — you must proactively contact Synthesia to prevent this.
CA-P-004282 · First tracked Apr 30, 2026 · Last seen Apr 30, 2026
Stability AI · Stability AI Privacy Policy
Most users do not expect their creative prompts to be used as training data for commercial AI systems, and this use may be difficult to undo once data is incorporated into model weights.
CA-P-003724 · First tracked Apr 28, 2026 · Last seen Apr 28, 2026
Ideogram · Ideogram Privacy Policy
This means your creative inputs — the ideas you describe in prompts — become training material for a commercial AI system, which most users do not expect when generating images for personal use.
CA-P-004442 · First tracked May 2, 2026 · Last seen May 2, 2026
Luma AI · Luma AI Privacy Policy
Your personal creative content — including potentially identifiable images, videos, and conversations — may permanently shape Luma's commercial AI products with no clear mechanism to withdraw consent for this specific purpose.
CA-P-006372 · First tracked May 8, 2026 · Last seen May 8, 2026
Replit · Replit Privacy Policy
Users building proprietary software or working with sensitive business logic may inadvertently contribute that content to Replit's AI training data without a clear, granular opt-out mechanism.
CA-P-004425 · First tracked Apr 30, 2026 · Last seen Apr 30, 2026
Perplexity AI · Perplexity Privacy Policy
This means your queries, including potentially sensitive ones about health, finances, or personal matters, could become part of the data used to build Perplexity's AI models.
CA-P-010346 · First tracked May 11, 2026 · Last seen May 12, 2026
AI21 Labs · AI21 Labs Privacy Policy
Your private queries and creative or business inputs submitted to AI21's platform may become training data for future AI systems, raising concerns about confidentiality, intellectual property, and data sovereignty.
CA-P-004111 · First tracked Apr 30, 2026 · Last seen Apr 30, 2026
OpenAI · Privacy Policy (ROW)
This means everything you type into ChatGPT — including personal details, health concerns, financial questions, or private matters — could become training data for future AI systems unless you explicitly opt out.
CA-P-002436 · First tracked Apr 9, 2026 · Last seen Apr 10, 2026
Inflection AI · Inflection AI Privacy Policy
Most people do not expect that the details they share in a private conversation could be retained and used as training data; this is especially significant if you have shared sensitive personal, health, financial, or emotional information with the AI.
CA-P-008928 · First tracked May 10, 2026 · Last seen May 12, 2026
Anthropic · Anthropic Privacy Policy
The safety-review exception means your opt-out does not fully protect your conversations from being used in AI training, which is a meaningful limitation that may not be obvious to most users.
CA-P-002125 · First tracked Apr 4, 2026 · Last seen Apr 4, 2026
Anthropic · Anthropic Privacy Policy
Your private conversations with Claude — including sensitive personal topics — may become training data for AI models, and the opt-out has significant exceptions that most users will not anticipate.
CA-P-002561 · First tracked Apr 9, 2026 · Last seen Apr 9, 2026

Professional Governance Intelligence

Monitor specific governance provisions across platforms.

Professional includes provision-level monitoring, regulatory mapping, and audit-ready analysis.
