Grindr
· Grindr Terms of Service
Grindr relies on self-attestation rather than independent age verification, meaning minors may access the platform despite the prohibition — a safety risk given the platform's …
Genetic testing of minors raises significant ethical and privacy concerns, as DNA results are permanent and irrevocable, and children cannot meaningfully consent to having their …
Minors using AI systems may share sensitive personal information without fully understanding data retention and training implications; the policy's reliance on age-gating without robust verification …
OpenAI
· OpenAI Terms of Use
If a minor uses OpenAI services without proper parental consent, both the minor and the account holder may be in violation of the Terms — …
Children under 13 are legally prohibited from using OpenAI services, and teens between 13 and 17 should only use them with parental permission — failure …
Snapchat
· Snapchat Terms of Service
Snapchat's age verification relies on self-reporting, and COPPA requires verifiable parental consent for children under 13 — weak enforcement of this threshold has been a …
TikTok
· TikTok Terms of Service
Parental consent obligations for under-18 users are enforceable and place legal responsibility on parents — but TikTok's ability to enforce age verification is limited, creating …
Google
· Google Terms of Service
Parents who create or authorize Google accounts for their children become legally responsible for their children's activity and any violations of Google's Terms, including content …
The platform hosts AI characters that can engage in a wide range of conversations, and access by minors raises significant safety concerns that the Terms …
Human oversight is a critical safeguard against AI errors causing serious harm, particularly in healthcare, criminal justice, and financial decisions where automated errors can have …
PayPal
· PayPal Privacy Statement
Automated decisions can affect your account access, transactions, and financial opportunities without human review, and your personal data is being used to train AI systems …
Microsoft
· Microsoft Privacy Statement (Legacy)
AI prompts can contain sensitive personal, professional, or confidential information, and users may not realize this content is stored, reviewed by humans, and used to …
This means your professional history, content, and behavior on LinkedIn may permanently contribute to AI systems, and the policy does not specify data minimization or …
This provision means your professional history, opinions, posts, and behavioral data may be permanently incorporated into AI models, with real implications for how your data …
This license is broad and perpetual, meaning LinkedIn can use your professional content, name, image, and likeness to train AI models even after you delete …
Your personal browsing habits, pins, and interactions could be used to train AI systems, which raises concerns about consent and the long-term use of your …
Your files and behavior on Dropbox could be used to develop AI systems, raising questions about consent, data minimization, and the scope of use beyond …
Using customer financial and behavioral data to train AI systems without a clear opt-out is an emerging area of regulatory concern and may constitute secondary …
Amazon
· Amazon Privacy Notice
Using consumer data to train AI models is an emerging area of regulatory scrutiny globally, and consumers generally do not expect their personal interactions to …
Roblox
· Roblox Privacy and Cookie Policy
Your messages, audio, and other content on Roblox may be used to train artificial intelligence systems, which is a significant secondary use of your personal …
Your professional data and behavior on LinkedIn may be used to build AI systems that affect how content, jobs, and people are ranked and recommended …
AI bias in Microsoft products used for hiring, lending, healthcare, or law enforcement can cause material harm to protected groups, and this commitment signals Microsoft's …
AI bias in consequential decisions — such as hiring, lending, or healthcare — can cause real harm, and this commitment is important, but it is …
Strava
· Strava Privacy Policy
The use of sensitive health and location data to train and run AI models introduces risks of opaque automated decision-making, potential processing beyond original purpose, …
This disclaimer shifts virtually all risk of AI-generated misinformation, harmful advice, or offensive output from Microsoft to the user, which is particularly significant as Copilot …
Your private conversations with AI characters can become part of the training data that shapes the AI system itself, with limited ability to prevent this …
GitHub
· GitHub Privacy Statement
Developers storing code on GitHub — including potentially proprietary or sensitive code — should be aware their contributions and behavior may feed into commercial AI …
Figma
· Figma Privacy Policy
This provision means that proprietary designs, client work, brand assets, or confidential prototypes you store in Figma could be used to improve Figma's AI products, …
OpenAI
· OpenAI Privacy Policy
Your private conversations with ChatGPT or other OpenAI tools may be used to train future AI systems, meaning sensitive information you share — health questions, …
Your private conversations — including anything personal or sensitive you share — may become training data for OpenAI's AI models unless you actively opt out.