The 16-year minimum for EU/EEA and UK users reflects stricter regulatory requirements under GDPR and UK GDPR for platforms processing children's data, and non-compliance with these restrictions creates significant regulatory risk.
Parents should be aware that children aged 13 and above can use Mistral AI, and that by default their conversations may be used for AI training unless they opt out, which has particular implications for children's personal data.
This prohibition is backed by federal criminal law: violations are not merely a policy matter but a federal crime, and OpenAI is legally required to report known CSAM to the National Center for Missing and Exploited Children.
Minors using AI systems may share sensitive personal information without fully understanding data retention and training implications; the policy's reliance on age-gating without robust verification creates compliance risk under COPPA and the EU's GDPR Art. 8.
The minimum age of 13 aligns with COPPA in the US, but OpenAI places the enforcement burden on users and parents rather than implementing robust age verification, creating a risk that minors access the services without parental consent.
Despite this restriction, Character.AI has faced significant legal and regulatory scrutiny for inadequate enforcement of its age gates, meaning minors may still access AI-generated content, including potentially harmful conversations.
If a minor uses OpenAI services without proper parental consent, both the minor and the account holder may be in violation of the Terms, and OpenAI may end up collecting or processing that minor's data without the consent the law requires.
Provisions affecting minors carry heightened regulatory significance under COPPA and GDPR-K, and determine what protections apply to younger players interacting with platform content, community features, and creator tools.
Snapchat's age verification relies on self-reporting, while COPPA requires verifiable parental consent for children under 13; weak enforcement of this threshold has been a significant regulatory concern for Snap.
Children under 13 are prohibited from using OpenAI services under the Terms, and teens between 13 and 17 may only use them with parental permission; failure to enforce this creates significant legal risk for both OpenAI and parents who allow unsupervised use.
Given Minecraft's predominantly young user base, creators, modders, and server operators who collect data or create content involving children face significant legal obligations under COPPA and equivalent international laws.
TikTok
· TikTok Terms of Service
Parental consent obligations for under-18 users are enforceable and place legal responsibility on parents, but TikTok's limited ability to verify users' ages creates compliance risk under COPPA.
Riot's age verification relies on self-reporting by minors and parents, meaning children may easily access games and make in-game purchases without genuine parental consent or financial oversight.
Pika
· Pika Terms of Service
Allowing 13-year-olds to use a generative AI platform that collects voice and likeness data raises significant COPPA compliance obligations, and the 18+ age restriction on the AI Self feature reflects the biometric and financial sensitivity of that feature.
The platform hosts AI characters that can engage in a wide range of conversations, and access by minors raises significant safety concerns that the Terms attempt to address through age gating.
Roblox
· Roblox Privacy Policy
This provision directly governs how Roblox monetizes its large minor user base through advertising and establishes the age thresholds at which different advertising practices apply.
These requirements protect against minors accessing adult content and against identity fraud, but they also mean Creators must submit sensitive personal identity documents to the platform.
Your data powers Supabase's business analytics and product improvements, and once it has been anonymized and aggregated, Supabase treats the derived data as its own property with no restrictions on how it can be used or shared.
The agreement caps Snowflake's total financial exposure at the fees paid in the prior year, regardless of the scale of data loss, service outage, or other harm; organizations with high-value or sensitive data stored on the platform should assess whether this cap is proportionate to their risk.
For developers or businesses that rely on AI21's API for production systems, a liability cap of $100 or twelve months of fees may be far below the actual cost of a service failure, data incident, or harmful output.
This clause limits X's total financial exposure to any free-tier user to $100, regardless of the nature or scale of the harm claimed, which is a standard but significant limitation on consumer remedies.
For free-tier users or anyone who has paid less than $100, this clause effectively eliminates any meaningful financial recovery for harms caused by Tabnine, including IP infringement in AI-generated code or data breaches.
For the vast majority of Substack users, who use the platform for free, the maximum compensation they could ever receive from Substack for any harm, including data breaches, wrongful account termination, or content loss, is $100.
This cap means that even if X's actions cause significant harm to you, such as wrongful account termination, data breaches, or loss of business, your legal recovery is effectively limited to $100 in most cases, which is far below the cost of litigation.
This carve-out means that deletion of your DNA data is not complete erasure — your genetic information may persist in research databases in aggregated form. This has particular significance for users who later change their mind about research participation.
Human oversight is a critical safeguard against AI errors causing serious harm, particularly in healthcare, criminal justice, and financial decisions where automated errors can have life-altering consequences.
If an AI feature places an incorrect or unwanted order, the terms disclaim Instacart's liability and make you responsible for catching errors before finalizing the order, which may limit your ability to get a refund or other remedy for AI-caused mistakes.
PayPal
· PayPal Privacy Statement
Automated decisions can affect your account access, transactions, and financial opportunities without human review, and your personal data is being used to train AI systems that may serve PayPal's broader commercial interests.
As a major provider of legal, tax, and risk intelligence products powered by AI, Thomson Reuters' automated processing could influence significant professional and legal outcomes, making transparency about how these systems work critically important.
Microsoft
· Microsoft Privacy Statement (Legacy)
AI prompts can contain sensitive personal, professional, or confidential information, and users may not realize this content is stored, reviewed by humans, and used to improve Microsoft's products.