Provision Registry

3422 classified provisions across 277 platforms — browse, filter, and compare.

Every clause classified by type, severity, and platform. Updated as policies change.

Professional: track specific clauses across platforms with provision-level alerts.
Filtered by severity: High
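Each entry in the listing below pairs a registry ID with a platform, a source document, a type and severity classification, and first-tracked/last-seen dates. As a rough illustration of that record shape, here is a minimal Python sketch — the `Provision` field names and the severity scale are assumptions, and the second placeholder record is invented for illustration; only the first record's values come from the listing.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Provision:
    """One classified clause tracked in the registry (assumed field names)."""
    provision_id: str   # e.g. "CA-P-006355"
    platform: str       # e.g. "Zoom"
    document: str       # e.g. "Zoom Terms of Service"
    category: str       # e.g. "Intellectual property"
    severity: str       # assumed scale, e.g. "high" / "medium" / "low"
    first_tracked: date
    last_seen: date

def filter_by_severity(provisions: list[Provision], severity: str) -> list[Provision]:
    """Return only the provisions at the given severity level."""
    return [p for p in provisions if p.severity == severity]

registry = [
    # Values taken from the first entry in the listing.
    Provision("CA-P-006355", "Zoom", "Zoom Terms of Service",
              "Intellectual property", "high",
              date(2026, 5, 8), date(2026, 5, 8)),
    # Hypothetical placeholder record, not from the registry.
    Provision("CA-P-000000", "Example", "Example Privacy Policy",
              "Data usage", "medium",
              date(2026, 4, 1), date(2026, 4, 2)),
]

high_risk = filter_by_severity(registry, "high")
print([p.provision_id for p in high_risk])  # ['CA-P-006355']
```

The separate first-tracked and last-seen dates are what make the "updated as policies change" claim auditable: a record whose last-seen date keeps advancing is still present in the current document.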
High · Intellectual property
Zoom · Zoom Terms of Service
This clause means your meeting conversations, shared files, and other content may be used to train AI systems, which raises significant privacy concerns especially in sensitive professional or personal contexts.
CA-P-006355 · First tracked May 8, 2026 · Last seen May 8, 2026
High · AI / automated
Zoom · Zoom Terms of Service
The agreement authorizes use of meeting and communication content, which may include audio, video, chat transcripts, and shared files, to develop and improve AI features, subject to consent and available opt-out mechanisms.
CA-P-011177 · First tracked May 12, 2026 · Last seen May 12, 2026
Microsoft · Responsible AI
AI bias in Microsoft products used for hiring, lending, healthcare, or law enforcement can cause material harm to protected groups, and this commitment signals Microsoft's recognition of that risk — though it does not provide consumers with a direct remedy.
CA-P-002072 · First tracked Apr 4, 2026 · Last seen Apr 9, 2026
Microsoft · Responsible AI
AI bias in consequential decisions — such as hiring, lending, or healthcare — can cause real harm, and this commitment is important, but it is a voluntary pledge without a consumer complaint mechanism or independent enforcement.
CA-P-002514 · First tracked Apr 9, 2026 · Last seen Apr 10, 2026
Zoom · Zoom Privacy Statement
Your private meeting conversations, voice recordings, and transcripts could be used to improve Zoom's AI products unless someone actively opts out on your behalf — most users will not know this is happening.
CA-P-006532 · First tracked May 8, 2026 · Last seen May 8, 2026
Figma · Figma Privacy Policy
Design files submitted to Figma's AI features may contain proprietary business information, client work, or sensitive intellectual property, and this clause authorizes Figma to use that material to improve its AI unless users take affirmative steps to opt out.
CA-P-010178 · First tracked May 11, 2026 · Last seen May 12, 2026
High · AI / automated
Notion · Notion Terms of Service
For anyone storing sensitive, confidential, or personal information in Notion, the AI terms are critical to understand before enabling AI features, as they may govern how that content is processed or used for model improvement.
CA-P-009750 · First tracked May 10, 2026 · Last seen May 11, 2026
Replit · Replit Privacy Policy
Users who write proprietary, sensitive, or business-critical code on Replit should understand that their content may be used beyond their immediate project to improve Replit's AI systems, which could have implications for intellectual property and confidentiality.
CA-P-009497 · First tracked May 10, 2026 · Last seen May 11, 2026
High · Data usage
Miro · Miro Privacy Policy
AI features may involve additional data processing, including the use of board content to train or improve AI models, which raises distinct privacy considerations not covered by the main Privacy Policy.
CA-P-007868 · First tracked May 9, 2026 · Last seen May 12, 2026
Miro · Miro Terms of Service
AI processing of board content by third-party AI providers creates significant data exposure risk — business strategies, personal data, and confidential information on Miro boards could be shared with external AI model operators.
CA-P-006189 · First tracked May 8, 2026 · Last seen May 8, 2026
Strava · Strava Privacy Policy
The use of sensitive health and location data to train and run AI models introduces risks of opaque automated decision-making, potential processing beyond original purpose, and exposure to sub-processors who may have different data governance standards.
CA-P-001434 · First tracked Apr 3, 2026 · Last seen Apr 17, 2026
High · AI / automated
Miro · Miro Privacy Policy
Users who place sensitive business, legal, HR, or personal information on Miro boards may not realize this content could be used to train AI models, which raises significant confidentiality and data protection risks.
CA-P-004979 · First tracked May 7, 2026 · Last seen May 7, 2026
Character.AI · Character.ai Terms of Service
This risk transfer is especially significant given that users — including minors — may interact with AI characters that produce harmful, emotionally manipulative, or dangerous content, yet the company accepts no liability for any of it.
CA-P-005720 · First tracked May 7, 2026 · Last seen May 7, 2026
High · Liability limitation
Microsoft Copilot · Microsoft Copilot Terms of Service
This disclaimer shifts virtually all risk of AI-generated misinformation, harmful advice, or offensive output from Microsoft to the user, which is particularly significant as Copilot is marketed for professional and productivity use cases.
CA-P-002080 · First tracked Apr 4, 2026 · Last seen Apr 9, 2026
High · AI / automated
Character.AI · Character.ai Terms of Service
Your conversations with AI characters, including what the AI says to you, fall under a perpetual commercial license that Character.AI can use to promote the service or share with third parties, even though the agreement states you own this content.
CA-P-008831 · First tracked May 10, 2026 · Last seen May 12, 2026
High · Liability limitation
Public.com · Public.com Terms of Service
AI-driven investing tools that generate portfolio recommendations may appear to offer professional investment guidance, but the disclaimer removes all legal accountability from Public.com if users suffer financial harm by following those suggestions.
CA-P-006471 · First tracked May 8, 2026 · Last seen May 8, 2026
High · AI / automated
GitHub · GitHub Privacy Statement
This is a default opt-in practice, meaning your data is used for AI training automatically unless you take action to opt out, which many users may not know to do.
CA-P-005601 · First tracked May 7, 2026 · Last seen May 7, 2026
Character.AI · Character.ai Privacy Policy
Your private conversations with AI characters can become part of the training data that shapes the AI system itself, with limited ability to prevent this after the fact.
CA-P-000784 · First tracked Apr 3, 2026 · Last seen Apr 17, 2026
High · Data usage
GitHub · GitHub Privacy Statement
Developers storing code on GitHub — including potentially proprietary or sensitive code — should be aware their contributions and behavior may feed into commercial AI products.
CA-P-001343 · First tracked Apr 3, 2026 · Last seen Apr 10, 2026
High · Acceptable use
Unreal Engine · Unreal Engine EULA
This is a significant and non-standard restriction that directly prohibits a growing use case in the tech industry — using game engine assets or renders to train AI systems — and breach could expose organizations to contract termination and damages claims.
CA-P-006175 · First tracked May 8, 2026 · Last seen May 8, 2026
High · Content moderation
Canva · Canva Privacy Policy
Your creative work uploaded to Canva — including personal images and design content — could be used to train commercial AI systems, which raises questions about intellectual property and consent.
CA-P-005242 · First tracked May 7, 2026 · Last seen May 7, 2026
Figma · Figma Privacy Policy
This provision means that proprietary designs, client work, brand assets, or confidential prototypes you store in Figma could be used to improve Figma's AI products, potentially beyond what users and enterprise customers expect when they sign up for a design tool.
CA-P-001107 · First tracked Apr 3, 2026 · Last seen Apr 17, 2026
PayPal · PayPal Privacy Statement
Automated decisions can affect whether you can access your account, obtain credit, or use PayPal services, and under GDPR users have specific rights to challenge these decisions and request human review — rights that are less clearly defined for US users.
CA-P-002717 · First tracked Apr 18, 2026 · Last seen Apr 18, 2026
Glean · Glean Privacy Policy
Using personal and proprietary workplace data to train AI models raises significant GDPR purpose limitation concerns and may not align with employees' reasonable expectations about how their work data is used.
CA-P-004383 · First tracked Apr 30, 2026 · Last seen Apr 30, 2026
Glean · Glean Privacy Policy
Using customer workplace data for AI model training raises significant questions about data purpose limitation and confidentiality of enterprise information, particularly where employees discuss sensitive business matters through Glean.
CA-P-007453 · First tracked May 9, 2026 · Last seen May 12, 2026
OpenAI · OpenAI Terms of Use
Your private conversations with ChatGPT or other OpenAI tools may be used to train future AI systems, meaning sensitive information you share — health questions, legal issues, personal problems — could potentially influence model outputs for other users.
CA-P-000072 · First tracked Apr 3, 2026 · Last seen Apr 3, 2026
High · Data usage
Anthropic · Claude.ai Terms of Service
Your private conversations with Claude could be used to improve Anthropic's AI systems unless you actively disable this in your account settings.
CA-P-000094 · First tracked Apr 3, 2026 · Last seen Apr 17, 2026
High · Data usage
Google Gemini · Gemini Apps Privacy Notice
Personal information embedded in AI prompts — including names, health details, financial situations, or relationship issues — becomes part of the training dataset that improves Google's commercial AI products, raising questions about whether users genuinely understand or consent to this use.
CA-P-001607 · First tracked Apr 3, 2026 · Last seen Apr 9, 2026
High · AI / automated
Copy.ai · Copy.ai Privacy Policy
Business users may be inputting proprietary strategies, customer data, or confidential information into Copy.ai workflows — this clause means that content could influence AI model behavior accessible to other users.
CA-P-004316 · First tracked Apr 30, 2026 · Last seen Apr 30, 2026
High · Data usage
OpenAI · OpenAI Privacy Policy
Your private conversations — including anything personal or sensitive you share — may become training data for OpenAI's AI models unless you actively opt out.
CA-P-002002 · First tracked Apr 4, 2026 · Last seen Apr 4, 2026

Professional Governance Intelligence

Monitor specific governance provisions across platforms.

Professional includes provision-level monitoring, regulatory mapping, and audit-ready analysis.
