Zoom
· Zoom Terms of Service
This clause means your meeting conversations, shared files, and other content may be used to train AI systems, which raises significant privacy concerns, especially in sensitive professional or personal contexts.
Zoom
· Zoom Terms of Service
The agreement authorizes use of meeting and communication content, which may include audio, video, chat transcripts, and shared files, to develop and improve AI features, subject to consent and available opt-out mechanisms.
AI bias in Microsoft products used for hiring, lending, healthcare, or law enforcement can cause material harm to protected groups, and this commitment signals Microsoft's recognition of that risk — though it does not provide consumers with a direct remedy.
AI bias in consequential decisions — such as hiring, lending, or healthcare — can cause real harm, and this commitment is important, but it is a voluntary pledge without a consumer complaint mechanism or independent enforcement.
Zoom
· Zoom Privacy Statement
Your private meeting conversations, voice recordings, and transcripts could be used to improve Zoom's AI products unless someone actively opts out on your behalf, and most users will not know this is happening.
Figma
· Figma Privacy Policy
Design files submitted to Figma's AI features may contain proprietary business information, client work, or sensitive intellectual property, and this clause authorizes Figma to use that material to improve its AI unless users take affirmative steps to opt out.
Notion
· Notion Terms of Service
Anyone storing sensitive, confidential, or personal information in Notion should understand the AI terms before enabling AI features, as those terms may govern how that content is processed or used for model improvement.
Replit
· Replit Privacy Policy
Users who write proprietary, sensitive, or business-critical code on Replit should understand that their content may be used beyond their immediate project to improve Replit's AI systems, which could have implications for intellectual property and confidentiality.
Miro
· Miro Privacy Policy
AI features may involve additional data processing, including the use of board content to train or improve AI models, which raises distinct privacy considerations not covered by the main Privacy Policy.
Miro
· Miro Terms of Service
AI processing of board content by third-party AI providers creates significant data exposure risk — business strategies, personal data, and confidential information on Miro boards could be shared with external AI model operators.
Strava
· Strava Privacy Policy
The use of sensitive health and location data to train and run AI models introduces risks of opaque automated decision-making, potential processing beyond original purpose, and exposure to sub-processors who may have different data governance standards.
Miro
· Miro Privacy Policy
Users who place sensitive business, legal, HR, or personal information on Miro boards may not realize this content could be used to train AI models, which raises significant confidentiality and data protection risks.
This risk transfer is especially significant given that users — including minors — may interact with AI characters that produce harmful, emotionally manipulative, or dangerous content, yet the company accepts no liability for any of it.
This disclaimer shifts virtually all risk of AI-generated misinformation, harmful advice, or offensive output from Microsoft to the user, which is particularly significant as Copilot is marketed for professional and productivity use cases.
Your conversations with AI characters, including what the AI says to you, fall under a perpetual commercial license that lets Character.AI use them to promote the service or share them with third parties, even though the agreement states you own this content.
AI-driven investing tools that generate portfolio recommendations may appear to offer professional investment guidance, but the disclaimer removes all legal accountability from Public.com if users suffer financial harm by following those suggestions.
GitHub
· GitHub Privacy Statement
This is a default opt-in practice, meaning your data is used for AI training automatically unless you take action to opt out, which many users may not know to do.
Your private conversations with AI characters can become part of the training data that shapes the AI system itself, with limited ability to prevent this after the fact.
GitHub
· GitHub Privacy Statement
Developers storing code on GitHub — including potentially proprietary or sensitive code — should be aware their contributions and behavior may feed into commercial AI products.
This is a significant and non-standard restriction: it directly prohibits a growing practice in the tech industry, using game engine assets or renders to train AI systems, and a breach could expose organizations to contract termination and damages claims.
Canva
· Canva Privacy Policy
Your creative work uploaded to Canva — including personal images and design content — could be used to train commercial AI systems, which raises questions about intellectual property and consent.
Figma
· Figma Privacy Policy
This provision means that proprietary designs, client work, brand assets, or confidential prototypes you store in Figma could be used to improve Figma's AI products, potentially beyond what users and enterprise customers expect when they sign up for a design tool.
PayPal
· PayPal Privacy Statement
Automated decisions can affect whether you can access your account, obtain credit, or use PayPal services, and under GDPR users have specific rights to challenge these decisions and request human review — rights that are less clearly defined for US users.
Glean
· Glean Privacy Policy
Using personal and proprietary workplace data to train AI models raises significant GDPR purpose limitation concerns and may not align with employees' reasonable expectations about how their work data is used.
Glean
· Glean Privacy Policy
Using customer workplace data for AI model training raises significant questions about data purpose limitation and confidentiality of enterprise information, particularly where employees discuss sensitive business matters through Glean.
Your private conversations with ChatGPT or other OpenAI tools may be used to train future AI systems, meaning sensitive information you share — health questions, legal issues, personal problems — could potentially influence model outputs for other users.
Your private conversations with Claude could be used to improve Anthropic's AI systems unless you actively disable this in your account settings.
Personal information embedded in AI prompts — including names, health details, financial situations, or relationship issues — becomes part of the training dataset that improves Google's commercial AI products, raising questions about whether users genuinely understand or consent to this use.
Business users may be inputting proprietary strategies, customer data, or confidential information into Copy.ai workflows; this clause means that content could influence AI model behavior that other users encounter.
OpenAI
· OpenAI Privacy Policy
Your private conversations — including anything personal or sensitive you share — may become training data for OpenAI's AI models unless you actively opt out.