PayPal
· PayPal Privacy Statement
Using customer data to train AI models raises questions about the scope of consent provided and how long such data is retained for training purposes, which may have implications under GDPR's purpose limitation and data minimization principles.
PayPal
· PayPal Privacy Statement
This provision discloses that personal information, including financial and transaction data, is used to train AI models, and that automated decision-making is applied to fraud and risk assessments that may have consequences for account access and service availability.
This provision states that content users enter into Windsurf, which may include proprietary code, sensitive queries, or personal information, can be retained and used to train the company's AI systems beyond the immediate session.
Your coding questions, code snippets, and AI conversations may become training data, potentially including sensitive or proprietary code you did not intend to share.
OpenAI
· OpenAI Privacy Policy
Training is enabled by default on an opt-out rather than opt-in basis, meaning your conversation content is used for model training unless you take active steps to stop it, something many users may not be aware of.
This provision means that creative inputs and outputs produced while using Stability AI tools may become part of the data used to improve the company's AI models, raising questions about consent, data minimization, and the scope of use beyond the immediate service interaction.
This provision means that if an employee or end user relies on a Perplexity AI output that turns out to be incorrect, the enterprise customer has no warranty claim against Perplexity. The terms place the entire risk of AI output accuracy on the customer.
This clause means that if Gemini gives you wrong medical, legal, or financial advice and you act on it, Google bears no responsibility for any harm you suffer.
If ChatGPT generates false information about a real person — including you — OpenAI disclaims liability, which raises serious concerns about AI-generated defamation, inaccurate financial or medical advice, and the right to correct AI-generated falsehoods.
Writer
· Writer Terms of Service
If you rely on Writer's AI output in a business decision, legal filing, medical recommendation, or any other consequential context and it turns out to be wrong, Writer bears no liability — you do.
Code you submit to Tabnine — including proprietary business logic or sensitive algorithms — may be used to train or improve Tabnine's AI models, which could raise IP confidentiality and trade secret concerns for enterprise users.
GitHub
· GitHub Privacy Statement
The policy authorizes use of user data for AI product development, which may include training or improving machine learning models; the full scope of this use is not fully defined in this document and requires review of separate product terms.
Airbnb
· Airbnb Terms of Service
This clause significantly limits what you can recover from Airbnb if something goes wrong — such as a listing being materially different from its description, a safety incident at a property, or a host or guest causing damage — even if Airbnb's platform facilitated the harm.
Pika
· Pika Terms of Service
Holding users personally liable for autonomous AI-generated content they did not directly produce is an unusual and potentially unfair risk allocation that could expose users to liability for AI hallucinations, defamatory statements, or harmful content generated by their AI Self without their knowledge.
Pika
· Pika Terms of Service
If your AI Self generates harmful, defamatory, or misleading content in autonomous interactions with other users or on social media, these terms assert that you bear sole responsibility for those outputs, which could create unexpected legal exposure for the account holder.
AI systems used in commercial credit scoring, risk decisioning, and business intelligence can have material impacts on businesses and individuals, and a voluntary self-certification may not provide sufficient independent oversight or recourse.
This means conversations you have with Xbox or other Microsoft AI features are not private — they may be stored, reviewed by humans, and used to build Microsoft's AI products.
Your messages, creative work, and uploaded files may be used to train OpenAI's AI systems, potentially influencing future AI outputs, and your data may persist in model weights even after you delete your account.
TikTok
· TikTok Terms of Service
Your creative content becomes training data for TikTok's AI products without any compensation, and this right is irrevocable — meaning even if you delete your content or account, TikTok may have already used it to train models that continue to benefit the company.
This license covers your voice recordings and video, which Descript may use to train AI models including voice cloning features, a use most users would not anticipate from an editing app.
Your personal content and professional information could be used to build AI systems, potentially without your active awareness, and this use extends to third-party AI tools LinkedIn works with.
This provision permanently transfers broad intellectual property rights over your uploaded content to Instacart for AI development purposes, with no compensation and no expiration, even if you later delete your account.
Your creative prompts and generated content may be used commercially by Leonardo.Ai to improve its product, potentially without meaningful compensation or notification each time this occurs.
Runway
· Runway Terms of Service
This provision means your creative prompts and AI-generated videos become permanent training data for Runway's AI, with no ability to withdraw consent even if you delete your account.
Your creative inputs, prompts, and generated images can be permanently incorporated into Ideogram's AI training datasets, meaning your content shapes future commercial AI products without compensation or ongoing control.
Pika
· Pika Terms of Service
Your voice recordings, facial likeness, and personal characteristics submitted to Pika can be used to train AI models — this type of biometric data collection carries significant privacy risks and is regulated by specific state laws with private rights of action.
Pika
· Pika Terms of Service
Your voice recordings and likeness data are biometric-adjacent personal information; their use for AI training and autonomous AI Self creation raises significant privacy concerns and may trigger specific legal protections depending on your state of residence.
Slack
· Slack Terms of Service
Confidential business communications, personal employee data, and sensitive files shared in Slack workspaces could be used by Slack to train AI models unless the organization takes affirmative steps to opt out.
Microsoft
· Microsoft Privacy Statement (Legacy)
As AI becomes central to Microsoft's products, the use of your personal conversations, documents, and inputs to train AI models represents a significant and growing use of personal data that many users may not anticipate.
Your personal data — including things you type, say, or create — may be used to improve Microsoft's AI systems, often without a clear opt-out mechanism surfaced at the point of collection.