Found in 26 of 170 platforms tracked (15% adoption) · 304 provisions
For businesses running critical infrastructure on AWS, an unannounced suspension could cause immediate and severe operational disruption with no guaranteed right of appeal or reinstatement.
Businesses hosting customer-facing applications on AWS can lose their entire cloud infrastructure if their end-users violate the AUP, even if the business itself was unaware of the violation.
Arbitration is typically faster but favors large corporations; it limits your ability to present your case publicly and restricts the remedies available to you compared to a court proceeding.
This prohibition extends to facilitating DDoS attacks, meaning businesses that provide services or tools that could be weaponized for network abuse — even unintentionally — are at risk of AUP violati…
This provision imposes an absolute prohibition with no safe harbor — any AWS-hosted platform that fails to prevent or promptly remove such content faces immediate account termination and mandatory la…
Interest-based advertising involves building detailed behavioral profiles from your activity across Amazon's ecosystem and potentially partner sites, and this data may be shared with Amazon's large a…
Security researchers and penetration testers must obtain explicit authorization before conducting any testing through AWS infrastructure, or they risk immediate account suspension and potential law e…
Voice recordings are a sensitive data category that can reveal personal conversations, health information, and household activity — and the involvement of human reviewers goes beyond what many consum…
Amazon has unilateral power to terminate your access to all Amazon services, including your purchase history, digital content library, and stored payment information, with limited recourse available …
If Amazon's error causes you significant financial, personal, or data-related harm beyond what you paid for services, you cannot recover those losses from Amazon, regardless of the severity of their …
Amazon operates numerous services that children commonly use — including Alexa, Amazon Kids, Prime Video, and Kindle — and the adequacy of age verification and parental consent mechanisms is a signif…
Precise location data is among the most sensitive personal information — it can reveal where you live, work, worship, receive medical care, and more — and Amazon's collection of this data across its …
Using consumer data to train AI models is an emerging area of regulatory scrutiny globally, and consumers generally do not expect their personal interactions to permanently inform commercial AI syste…
The acknowledgment that training dataset requests are 'complex' and may be declined signals that data deletion and correction rights are not fully enforceable in practice for data already incorporate…
People who have never signed up for Claude may have had their personal data — published online or sold by a data broker — incorporated into Claude's training dataset without their knowledge or consen…
This tiered compliance structure creates differential obligations for operators in regulated industries, meaning businesses in healthcare, law, and finance face significantly greater compliance burde…
The explicit prohibition extends to fictional and roleplay contexts, closing creative framing loopholes, and the mandatory reporting commitment creates real law enforcement consequences for violation…
The safety-review exception means your opt-out does not fully protect your conversations from being used in AI training, which is a meaningful limitation that may not be obvious to most users.
The explicit prohibition on AI impersonation of humans — and the inclusion of neural data alongside biometric data — reflects emerging regulatory standards and goes beyond most existing platform AUPs…
This is one of the few affirmative law enforcement reporting obligations explicitly stated in an AI platform AUP, meaning detection events trigger mandatory action rather than discretionary review.
The opt-out is not a complete opt-out: two broad exceptions—feedback interactions and safety flags—mean a significant portion of your conversations may still be used for AI training regardless of you…
Mandatory arbitration eliminates your right to sue Anthropic in court and prevents you from joining with other users in a class action, significantly reducing your practical ability to seek compensat…
Many consumers forget to cancel before renewal and lose a full billing period's fee with no refund recourse, which is a common and costly pattern with subscription services.
A simple thumbs up or down click triggers storage and unlimited use of the entire conversation, including potentially sensitive content you may have shared, with no compensation and as a waiver of an…
This cap severely limits your financial recovery from Anthropic even if you suffer significant harm, such as from a data breach, incorrect AI output that causes professional or financial damage, or a…
Health data is among the most sensitive categories of personal information, and its collection by a technology company creates significant privacy and regulatory risk, particularly regarding secondar…
For developers who rely on the App Store as their primary or sole distribution channel, unilateral removal without binding procedural protections or damages remedy can be commercially devastating and…
This provision protects children from behavioral tracking and targeted advertising within apps, addressing one of the most significant child safety risks in the digital ecosystem.
Device fingerprinting is a covert tracking technique that circumvents user privacy choices; Apple's explicit prohibition is one of the strongest anti-fingerprinting stances by any major platform and …
Permanent exclusion from the Developer Program represents the most severe commercial sanction Apple can impose, effectively ending a developer's ability to reach Apple's global user base — with no gu…
This clause removes your right to sue Apple in court or join other consumers in a class action, which is often the only practical way to pursue small individual claims against a large company.
Most consumers reasonably believe they 'own' content they purchase digitally, but this clause means Apple retains control over access to that content and can remove it if licensing agreements with ri…
Content you have paid to license could be removed from your library or become inaccessible if Apple exercises this right, and you would have no contractual claim to a refund or compensation.
Many consumers are unaware their subscriptions auto-renew, leading to unintended charges — and a 30-day email notice of a price increase means your cost can go up unless you actively cancel within th…
In-app purchases in apps and games can accumulate quickly and are charged to the payment method on the Apple ID — parents are fully liable for these charges and must proactively enable restrictions t…
If Apple closes your account — even by mistake or due to an automated content moderation error — you could permanently lose access to all content you have paid to license, including apps, movies, mus…
This requirement means that consumers often pay higher prices for digital goods purchased through iOS apps because developers must factor Apple's commission into their pricing, and developers cannot …
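The pass-through arithmetic is simple but easy to underestimate. A minimal sketch, assuming the commonly cited 30% headline commission (actual rates vary by program and developer tier):

```python
# Sketch of commission pass-through. The 30% figure is the commonly
# cited headline rate; actual rates vary by program and tier.

def gross_price_for_target_net(target_net: float, commission: float) -> float:
    """In-app price a developer must charge to still receive
    target_net after the platform commission is deducted."""
    return target_net / (1 - commission)

# To net $10.00 under a 30% commission, the consumer-facing price
# must rise to roughly $14.29, not $13.00.
print(round(gross_price_for_target_net(10.00, 0.30), 2))  # 14.29
```

Note the denominator: because the commission is taken from the gross price, the markup needed is larger than the commission percentage itself.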
The 'good faith belief' standard for voluntary disclosure — beyond legally compelled process — means Coinbase can share your data with authorities proactively without a court order and without notify…
Biometric data is uniquely sensitive because it cannot be changed if compromised, and multiple US state laws impose strict requirements on how companies collect, store, and delete it.
Sharing your financial and behavioral data with advertising and analytics companies goes beyond what most consumers expect from a financial services provider and may constitute a 'sale' or 'sharing' …
As a regulated financial services company, Coinbase receives thousands of law enforcement requests annually and is legally required to respond to many, meaning your full financial and identity profil…
This volume of sensitive financial and identity data makes your Coinbase profile an extremely high-value target for data breaches, and Coinbase's collection of this data is largely non-negotiable if …
Staking commission reduces your effective annual yield and should be factored into any comparison of staking returns across different platforms or protocols.
The spread is invisible at first glance because it is embedded in the quoted price, meaning consumers may not realize they are paying it on top of the stated transaction fee.
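Both fee effects can be made concrete with a small worked sketch. The commission, spread, and fee figures below are illustrative assumptions, not Coinbase's actual rates:

```python
# Illustrative figures only -- not Coinbase's actual commission,
# spread, or transaction fee.

def effective_staking_yield(gross_apy: float, commission: float) -> float:
    """APY the staker actually keeps after the platform's commission."""
    return gross_apy * (1 - commission)

def all_in_unit_cost(quoted_price: float, spread: float, stated_fee: float) -> float:
    """Total cost per unit purchased: the spread is already embedded in
    the quoted price, and the stated transaction fee is added on top."""
    return quoted_price * (1 + spread) * (1 + stated_fee)

# A 4% gross staking APY with a 25% commission nets only 3%.
print(round(effective_staking_yield(0.04, 0.25), 4))   # 0.03

# A $100 quote with a 0.5% embedded spread plus a 1% stated fee
# costs about $101.505 per unit -- more than the stated fee alone suggests.
print(round(all_in_unit_cost(100.0, 0.005, 0.01), 3))  # 101.505
```

Comparing platforms on the stated fee alone therefore understates the true cost whenever a spread is embedded in the quote.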
Blockchain analytics firms can use your linked identity and wallet address to trace all your on-chain cryptocurrency transactions across the entire public blockchain, permanently associating your rea…
This is among the most sensitive combinations of personal data that can be collected — government ID plus financial account information plus biometrics together enable identity theft, financial fraud,…
US privacy law provides weaker protections than GDPR for EU residents, and transferring your data to the US means it may be accessible to US government surveillance programs without the same legal sa…
Sharing cryptocurrency trading data and financial information with advertising networks goes beyond what most consumers expect from a financial services provider and may enable highly sensitive infer…
A $100 liability cap on a platform where users may hold tens of thousands of dollars in cryptocurrency is extraordinarily low and means Coinbase bears almost no financial responsibility for losses yo…
This provision gives Coinbase nearly unchecked power to cut off your access to your money and cryptocurrency with limited appeal rights, which could be devastating if triggered in error during a mark…
This is the highest-impact financial risk in the entire agreement: unlike money in an FDIC-insured bank, cryptocurrency held on Coinbase is not protected if Coinbase fails, as demonstrated by the Cel…
This clause strips you of your right to sue Coinbase in court and prevents you from joining other harmed customers in a class action — the primary mechanism through which consumers hold large compani…
Even data described as 'non-personally identifiable' can often be re-identified when combined with other data sources, and Google's broad sharing with advertising partners means your behavioral data …
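The re-identification risk is the classic linkage attack: join the "non-identifiable" records with a public dataset on shared quasi-identifiers such as ZIP code, birth year, and gender. A toy sketch with entirely made-up data:

```python
# Toy linkage attack on made-up data: an "anonymized" record joined
# with a public directory on quasi-identifiers re-identifies the person.
anonymized = [
    {"zip": "60614", "birth_year": 1985, "gender": "F", "condition": "asthma"},
]
public_directory = [
    {"name": "Jane Roe", "zip": "60614", "birth_year": 1985, "gender": "F"},
    {"name": "John Doe", "zip": "60601", "birth_year": 1990, "gender": "M"},
]
quasi = ("zip", "birth_year", "gender")

for record in anonymized:
    matches = [p for p in public_directory
               if all(p[k] == record[k] for k in quasi)]
    if len(matches) == 1:  # a unique match means re-identification
        print(matches[0]["name"], "->", record["condition"])
# prints: Jane Roe -> asthma
```

No single field identifies anyone, yet the combination does, which is why "non-personally identifiable" is a weak guarantee once data is shared.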
This is the core commercial use of your data. It means a health search in Google Search, a video watched on YouTube, and a location visited can all be combined to infer sensitive characteristics and …
Precise and continuous location data is among the most sensitive personal information — it reveals your home, workplace, medical appointments, religious practices, political activities, and personal …
Voice data is biometric in nature and may be subject to heightened legal protections under state laws like Illinois BIPA, and users are often unaware of the extent to which voice recordings are retai…
Children's data receives special legal protections under COPPA and GDPR Article 8, and Google's compliance posture here is critical given the scale of children's use of YouTube, Google Search, and Go…
This license is extremely broad and covers virtually every type of content you create or share using Google's services, potentially allowing Google to use your creative work in ways you did not speci…
Users who believe deleting their account will remove Google's rights to their content may be mistaken — the license survives in certain circumstances, which has significant implications for personal …
For users who rely on Gmail, Google Drive, Google Workspace, or YouTube for their livelihood or essential communications, sudden account termination without meaningful due process poses a serious ris…
If Google loses your data, disrupts your business, or causes you financial harm through a service failure, you cannot recover meaningful compensation under these terms — an especially significant ris…
Parents who create or authorize Google accounts for their children become legally responsible for their children's activity and any violations of Google's Terms, including content their children uplo…
AI bias in hiring, lending, healthcare, or criminal justice can have life-altering consequences; this provision signals Google's awareness but does not specify how bias will be measured, audited, or …
This warning, buried within the privacy policy, signals that Gemini is not a secure or confidential communication channel — a fact that many users who share medical, financial, or professional inform…
An opt-out model for AI training data use means most users' conversations contribute to AI model development without their active knowledge or consent, which raises significant concerns under GDPR's …
Most users assume AI chat conversations are only processed by automated systems; learning that human reviewers can read their conversations — even in pseudonymised form — fundamentally changes the pr…
An 18-month retention window means your AI conversations — which may include sensitive personal, medical, or financial details — are stored far longer than most users would expect, and persist even a…
This clause prevents developers from using Google's own infrastructure and data to build a rival mapping service, effectively locking them into Google's ecosystem.
Businesses that have deeply integrated Google Maps APIs into their products face the risk of sudden service changes or terminations that could break their applications without meaningful advance noti…
This clause places direct responsibility on the developer-customer to obtain consent for real-time tracking use cases, which has significant privacy and regulatory implications.
The subjective standard of 'reasonably believes' gives Google broad discretion to suspend access immediately without prior warning, which can devastate businesses that depend on Maps APIs.
Users who land on this page and do not click through to their regional document have no information about the rules governing their subscription, meaning they are effectively agreeing blind to terms …
Relying on legitimate interests rather than consent for advertising profiling means Meta does not ask for your permission and you must actively object — a significant shift of burden onto the consume…
This license is extremely broad and survives in certain circumstances even after you delete content, meaning Meta can continue using your posts commercially if others have already reshared them.
For personal users, account suspension means immediate loss of access to photos, messages, groups, and social connections. For businesses relying on Facebook Pages or Marketplace, sudden suspension c…
This gives Facebook users a contractual right to have their data deleted from third-party apps, which is a meaningful consumer protection especially for apps they no longer use.
Meta's age restriction is a legal minimum required by COPPA, but the absence of robust age verification means children under 13 frequently access the platform — creating significant regulatory exposu…
This is one of the most direct protections for Facebook users — it means that in theory, apps using Facebook login cannot monetize your data by selling it to data brokers or advertisers.
Despite these stated restrictions, Meta has faced significant regulatory scrutiny and litigation regarding children's access to its platforms and the adequacy of age verification measures, creating o…
Once your data is shared with third-party advertisers and partners, Meta's privacy policy no longer governs how those third parties use your information — each partner's own privacy practices apply.
Biometric data is among the most sensitive personal data categories — it is permanent and cannot be changed if compromised — and its collection is subject to strict laws in Illinois, Texas, and Washi…
Meta's advertising model is built on extensive behavioral profiling — your posts, likes, location, browsing behavior, and off-Facebook activity all feed into ad targeting systems, representing one of…
This clause protects users from having their Facebook data turned into a surveillance instrument — for example, it prohibits building tools that track activists, journalists, or individuals based on …
International transfers of EU personal data to the US require specific legal mechanisms under GDPR, and following the Schrems II ruling, transfers without appropriate safeguards are unlawful — Meta h…
This provision requires genuine, informed consent from users before their Facebook data is used in new or unexpected ways, which is a meaningful protection against covert data repurposing.
This means virtually anyone who uses the internet may have a data profile at Meta, compiled without their knowledge or consent and used for advertising targeting.
Processing inferences about political views, religion, health, and sexual orientation constitutes special category data processing under GDPR Article 9, which requires explicit consent — a higher leg…
This gives Meta enormous unilateral power over developers — any business that has built a product dependent on Facebook's API could have its access cut off without recourse, which is an unusual and s…
This is a critical child safety provision — it means third-party apps cannot use Facebook's platform to collect data about young children, providing a layer of protection for minors using family-link…
This clause gives Microsoft near-unlimited discretion to terminate your account and cut off access to all services, stored data, purchased content, and subscriptions with limited notice and no requir…
This commitment is directly relevant to consumers subject to AI-driven decisions in high-stakes contexts like employment screening, credit, healthcare, or law enforcement, where automated decisions w…
AI bias in Microsoft products used for hiring, lending, healthcare, or law enforcement can cause material harm to protected groups, and this commitment signals Microsoft's recognition of that risk — …
Algorithmic discrimination is a growing enforcement priority for regulators; if Microsoft AI systems produce discriminatory outcomes in employment, credit, housing, or healthcare contexts, affected u…
This means data you generate in one Microsoft product can be used to train AI models or inform decisions in completely separate products, creating a comprehensive profile you may not be aware of.
For free service users, this cap means your maximum recovery is just $100 even if Microsoft causes significant harm such as a data breach, loss of important stored documents, or prolonged service out…
Without a properly set up supervised child account, children under 13 may be using Microsoft services — including Xbox, Outlook, and Bing — in violation of COPPA, potentially exposing their personal …
This license is broad, worldwide, transferable, and royalty-free, meaning Microsoft can use your personal content — including documents, photos, and emails stored in OneDrive or Outlook — across its …
Health data is among the most sensitive categories of personal information, and its collection by a technology company through non-medical products creates risks around re-identification, secondary u…
Sensitive or personal information shared with Copilot or spoken to Cortana may be retained by Microsoft and used for AI model training, creating privacy risks if confidential or sensitive content is …
If a child in your household uses Xbox, Minecraft, or a family Microsoft account, their gaming behavior, location, voice chat, and usage data may be collected, and parents need to actively manage fa…
This age restriction is legally significant under COPPA and affects how families set up Microsoft accounts for children — if a child under 13 uses the services without proper parental consent setup, …
This clause eliminates your ability to participate in class action lawsuits against Microsoft, which are often the only practical way individuals can challenge large corporations over small but wides…
This disclaimer shifts virtually all risk of AI-generated misinformation, harmful advice, or offensive output from Microsoft to the user, which is particularly significant as Copilot is marketed for …
This clause gives Microsoft nearly unchecked power to terminate your access to services you may depend on — including email, cloud storage, and AI tools — without meaningful advance notice in many ci…
This clause removes your right to sue OpenAI in court and prevents you from joining with other affected users in a class action, which is often the only practical way to seek compensation for small b…
This is one of the few absolute, non-negotiable prohibitions in the policy — no operator or user exception applies, and violations will trigger law enforcement referral.
This is an absolute prohibition with no operator override, implicating federal criminal law and export controls — violations could expose users and operators to criminal prosecution, not just account…
This clause shifts significant compliance responsibility onto API operators — if a developer's platform enables a prohibited use, the operator bears legal and contractual liability, not just OpenAI.
Given that AI models can generate functional code, this prohibition addresses a specific and serious risk of OpenAI tools being weaponized for cybercrime — violations implicate federal computer fraud…
Sensitive personal information you share in conversations — including health questions, financial details, or private communications — could be used to shape future AI behavior.
The breadth of data collection — including conversation content, precise location, and third-party sourced data — creates a detailed profile of users that may be used for purposes beyond what users e…
Without robust age verification mechanisms, children under 13 may access ChatGPT and have their data collected in violation of COPPA, creating legal and safety risks.
Your personal conversations, creative work, and other inputs could be used to shape future AI model behavior at a massive scale, raising significant privacy and intellectual property concerns, partic…
Without a robust age verification system, minors may access the service in violation of these terms, exposing them to AI-generated content risks and creating COPPA liability for OpenAI if children's …
This clause places potentially unlimited financial liability on individual users for their use of ChatGPT, including covering OpenAI's attorneys' fees, which is an unusually broad indemnification obl…
While the ownership grant sounds user-friendly, the warranty that AI outputs don't infringe third-party rights places the copyright infringement risk squarely on users, despite the fact that the AI m…
If you suffer significant harm — financial loss, reputational damage, or disruption to your business — as a result of ChatGPT errors, hallucinations, or a service outage, OpenAI's legal exposure is c…
This is the single most significant data practice in the document — it means every conversation you have with ChatGPT may be used to make the AI better, including any personal or sensitive informatio…
If a child under 13 uses ChatGPT without authorization, their data may be collected in violation of COPPA, creating legal exposure for both OpenAI and any platform operator that facilitated access; p…
OpenAI retains essentially unilateral termination rights with minimal procedural constraints, which means paid subscribers can lose access to ChatGPT Plus or API services without a defined notice per…
Your private conversations — including anything personal or sensitive you share — may become training data for OpenAI's AI models unless you actively opt out.
Many users assume AI conversations are confidential, but human reviewers can access chat content — this is especially significant if you share health, financial, or personal information.
Minors using ChatGPT below the age threshold may have their data collected without COPPA-compliant parental consent, creating legal risk for OpenAI and privacy risk for children.
A fraud risk score assigned by Stripe could result in your payment being declined at any merchant using Stripe, without your knowledge of how the score was generated or the ability to contest it.
The scope of this indemnification is extremely broad, extending to Stripe's affiliates, processors, and agents, and covers claims arising from your general business operations — not just your use of …
For high-volume merchants paying substantial fees, three months of fees may still represent a tiny fraction of damages caused by service failures or wrongful fund withholding, making legal recovery l…
This clause means that unexpected spikes in customer disputes or chargebacks can result in immediate automatic debits from your Stripe balance, potentially driving your account negative and requiring…
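The cash-flow mechanics can be sketched with illustrative numbers. The per-dispute fee below is an assumed figure for the example, not a quoted Stripe rate:

```python
# Illustrative sketch of dispute debits; the per-dispute fee is an
# assumed figure, not a quoted Stripe rate.

def balance_after_disputes(balance: float, disputed_amounts: list[float],
                           per_dispute_fee: float = 15.0) -> float:
    """Each dispute debits the disputed amount plus a fee when the
    dispute is opened (funds are returned only if the dispute is won)."""
    for amount in disputed_amounts:
        balance -= amount + per_dispute_fee
    return balance

# A $500 balance hit by two $300 chargebacks goes negative immediately,
# before either dispute is resolved:
print(balance_after_disputes(500.0, [300.0, 300.0]))  # -130.0
```

A negative balance is then typically recovered from the merchant's linked bank account, which is the cash-flow squeeze the clause enables.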
For businesses that rely on Stripe as their primary payment processor, sudden termination without cause can immediately halt revenue collection and may coincide with funds being held in reserve, crea…
This clause gives Stripe near-unlimited discretion to freeze portions of your business revenue, potentially causing serious cash flow disruption, with limited ability to challenge their decision.
This classification determines whether Taskers receive employment protections (minimum wage, workers' compensation, benefits), and places all tax and compliance obligations on Taskers — but regulator…
Precise geolocation data is among the most sensitive personal data categories, and continuous location monitoring ('watchPosition') is typically used for real-time location tracking rather than simpl…
Transparency reports are a key accountability mechanism for large platforms; the completeness and accuracy of TikTok's transparency reporting is subject to DSA obligations and is a material factor in…
Minors on TikTok are entitled to heightened privacy and safety protections under COPPA and the EU's GDPR/DSA framework, but TikTok has faced repeated regulatory findings that its age verification and…
This monitoring goes far beyond standard analytics and constitutes collection of sensitive behavioral and physical data from users' devices, including potentially reading clipboard contents, which may…
This is one of the broadest content licenses in consumer social media: it is irrevocable, royalty-free, and explicitly covers AI model training, meaning your content may permanently inform TikTok's A…
Mandatory arbitration strips users of the right to sue TikTok in court or join with other affected users in a class action, significantly limiting legal recourse — particularly for low-value individu…
Parental consent obligations for under-18 users are enforceable and place legal responsibility on parents — but TikTok's ability to enforce age verification is limited, creating compliance risk under…
The explicit naming of BD TikTok USA LLC as a data-sharing partner, alongside unnamed 'affiliates,' means your personal data flows to multiple corporate entities for commercial purposes, raising data…
This level of device-level monitoring goes far beyond what most users would expect from a policy page and may constitute covert collection of sensitive signals used for device fingerprinting and beha…
This shows TikTok has a consent management platform (CMP) in place, but the effectiveness and granularity of consent collection determines whether EU and UK users' data rights are genuinely protected.
Children and teenagers are among TikTok's largest user groups, and the adequacy of age verification and parental consent mechanisms directly affects compliance with COPPA, GDPR Art. 8, and the UK Age…
Cross-border data transfers to countries outside the EU/EEA — including transfers to ByteDance infrastructure — require specific legal safeguards under GDPR, and TikTok's ownership structure has made…
Data transfers to ByteDance-affiliated infrastructure raise significant concerns about Chinese government access to user data under China's National Intelligence Law, which compels Chinese companies …
Background check data includes criminal history — some of the most sensitive personal information that exists — and automated or semi-automated decisions based on this data can result in drivers bein…
Biometric data is uniquely sensitive — unlike a password, you cannot change your face — and collection of this data without explicit informed consent triggers strict legal obligations in multiple US …
Automated decisions can result in drivers losing access to their livelihood without transparent explanation or meaningful human review, which is both a significant economic risk for drivers and a leg…
This means your precise location history, trip data, and personal information can be provided to police or government agencies, sometimes without your knowledge and without a court order in certain c…
This means Uber may know your exact whereabouts at all times — not just during trips — which is a significant privacy intrusion that goes beyond what most drivers would expect.
Once your data is shared with third parties, Uber's privacy policy no longer governs how those parties use it — each third party has its own data practices, and you may have limited ability to contro…
Audio and video recordings of passengers and drivers inside vehicles implicate wiretapping and electronic surveillance laws in many states, particularly two-party consent states. Drivers may face leg…
Biometric data is uniquely sensitive because it cannot be changed if compromised. Laws like Illinois BIPA create per-violation statutory damages of up to $5,000, and several states require explicit w…
Telematics data is used to make decisions about your driver account, including potential deactivation. You have limited visibility into how these scores are calculated and limited ability to challeng…
Your personal and behavioral data as a driver is being shared with major social media advertising networks, which can use it to build detailed profiles about you and target you — or others — with adv…
For a financial platform holding sensitive investment and identity data, inadequate security could expose users to financial fraud, identity theft, and account compromise.
Violating U.S. export control laws is a federal offense carrying significant criminal and civil penalties; this clause shifts compliance responsibility to the individual user.
If a minor uses Amazon without proper parental consent, any purchases or agreements made may be voidable under contract law, and Amazon's collection of their data may implicate children's privacy law…
This tracking occurs across the internet wherever Amazon's advertising or analytics tools are present, not just on Amazon.com, building a comprehensive profile of your online behavior.
The right to delete is limited — Amazon retains data for legal, dispute resolution, and business purposes even after deletion requests, meaning some data may persist indefinitely.
Amazon's repeat-infringer policy means your account — and all associated digital purchases — can be terminated for copyright violations, even if some claims are disputed.
By applying Washington law, Amazon may avoid stronger consumer protection statutes available to residents of California, New York, and other states.
Businesses using AWS Simple Email Service (SES) or other AWS communication services must maintain email sending hygiene and list management practices that comply with the AUP, or face account suspens…
AWS will remove infringing content and may suspend accounts under DMCA safe harbor procedures, but customers who knowingly facilitate IP infringement through their AWS-hosted platforms face independe…
This license never expires and allows Amazon to sublicense your content to third parties — meaning your review or photo could appear in Amazon advertising or be shared with partners without additiona…
AWS customers are bound by policy changes through continued use alone, with no requirement for direct notification — meaning significant new restrictions could take effect without the customer receiv…
Amazon's affiliate network is enormous — including AWS, Whole Foods, Ring, Twitch, MGM, Zappos, and many more — meaning your data may be used across a very broad corporate ecosystem, and in a corpora…
Active monitoring of user inputs by a dedicated team means your interactions with Claude are not private from Anthropic, and outputs can be silently modified without user notification — two practices…
This carve-out means the Universal Usage Standards are not truly universal — government customers may be permitted to use Claude for activities that would result in suspension or termination for priv…
Marketing and behavioral tracking cookies extend data collection beyond your Claude conversations to include browsing patterns and cross-site activity, which may be shared with advertising partners.
Knowing that deleted conversations take up to 30 days to be fully removed from Anthropic's systems is important for users who share sensitive personal information with Claude and want it erased promp…
Agentic AI guidelines represent a forward-looking regulatory posture that directly anticipates EU AI Act autonomous system requirements and reflects the novel risks of AI systems that act in the worl…
Users may not realize that a simple thumbs up or down triggers full retention of an entire conversation, including any sensitive personal information shared earlier in that same chat.
Many users may not realize that when using Claude through an employer or third-party app, this Privacy Policy does not protect them, and their data rights are governed by a different entity's policie…
The prohibition on psychological manipulation directly mirrors EU AI Act Art. 5(1)(a) prohibited practices for subliminal manipulation — making this provision one of the strongest alignment points be…
As AI becomes a significant tool for political communication and persuasion, this provision establishes clear boundaries against AI-enabled election interference — a growing area of regulatory focus …
The 18+ minimum age requirement is higher than the COPPA threshold of 13, which means Anthropic has chosen a more protective approach to age gating, but also means teenagers who circumvent the age re…
This clause protects Anthropic from securities law liability but also means you have no recourse against Anthropic if Claude provides flawed investment-related information that causes you financial l…
Continued use of the service after a change constitutes acceptance, meaning you could be bound by materially worse terms — such as expanded data use or new arbitration conditions — simply by not cancelli…
Employees using work email addresses may unknowingly expose personal or sensitive Claude conversations to their employer, as Anthropic can link individual accounts to enterprise accounts with employe…
As AI-generated media becomes increasingly realistic, this requirement establishes a minimum disclosure standard protecting consumers from being deceived by synthetic content depicting real individua…
Subscription dark patterns — where users are charged without clear advance notice — are a leading source of consumer complaints; Apple's disclosure requirement is a meaningful protection against surp…
Financial transaction data is highly sensitive and its collection — combined with location data — creates a detailed profile of your purchasing behavior that could be used for profiling, targeting, o…
Children's data requires heightened protection under US and international law, and the proliferation of Apple devices among minors creates significant compliance and safety obligations that consumers…
Privacy nutrition labels give consumers standardized, comparable information about data collection before they install an app, enabling more informed consent decisions — a significant consumer protec…
When Apple shares your data with third-party app developers and partners, those parties' use of your data is governed by their own policies, not Apple's, meaning Apple's privacy commitments do not ex…
If Apple's services fail, cause you losses, or don't work as expected, Apple has legally disclaimed most responsibility — making it very difficult to recover damages beyond the amount you paid for th…
Voice data can inadvertently capture sensitive conversations, third-party speech, and private information; its retention by a technology company for product improvement purposes raises significant pr…
Understanding the breadth of data Apple collects helps consumers assess their privacy exposure across the entire Apple ecosystem, which spans devices, apps, payments, and cloud services.
Users who choose convenient payment methods like debit cards may be paying substantially more per transaction without fully understanding the cost differential, which can significantly erode returns …
Your financial behavior and cryptocurrency investment patterns may be used to build advertising profiles and target you with offers, and you may not realize this is happening unless you actively revi…
Users subscribing to Coinbase One to avoid fees should understand that the spread (up to 2%) still applies even for Coinbase One members, meaning 'zero fees' does not mean zero cost.
Coinbase can change fees, data practices, arbitration terms, or other material provisions with only email notice — and your continued use of the platform is treated as acceptance even if you didn't r…
You are required to share highly sensitive personal and financial information including your SSN with Coinbase as a condition of service, and this data is subject to their privacy practices and poten…
The spread is effectively a hidden markup on your cryptocurrency purchase or sale price that is separate from the disclosed transaction fee, meaning the total cost of trading on Coinbase may be signi…
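As a rough illustration of how the spread adds to the disclosed fee, the sketch below computes an all-in trading cost. Only the "up to 2%" spread ceiling comes from the provision itself; the dollar amounts and the flat-fee parameter are hypothetical.

```python
# Hypothetical cost model: the disclosed flat transaction fee plus the
# spread markup baked into the quoted price. Numbers are assumptions,
# not Coinbase's actual schedule; only the 2% spread ceiling is sourced
# from the provision summarized above.
def total_cost(purchase_usd: float, spread_rate: float = 0.02,
               flat_fee: float = 0.0) -> float:
    """All-in cost of a purchase: spread markup plus any disclosed fee."""
    return purchase_usd * spread_rate + flat_fee

# A $1,000 purchase advertised as "zero fees" (e.g. under a fee-waiver
# subscription) can still cost up to $20 via the spread alone.
print(total_cost(1000.0, spread_rate=0.02, flat_fee=0.0))  # 20.0
```

The point of the sketch: waiving the flat fee zeroes only the second term, so the spread term remains the dominant, less visible cost at the maximum rate.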
Cryptocurrency taxation is complex (every trade is potentially a taxable event), and Coinbase's reporting to the IRS means the agency already has your transaction data — meaning failure to properly r…
California's CPRA grants some of the strongest consumer privacy rights in the US, including the right to limit how companies use sensitive data like your government ID and financial information, but …
Even if you close your Coinbase account and request data deletion, your identity and transaction records will be retained for years due to AML and financial reporting obligations, meaning exercising …
Combined with your identity and financial data, behavioral tracking data creates a comprehensive profile that can reveal your trading strategies, financial risk tolerance, and investment patterns.
The preview screen is Coinbase's primary mechanism for satisfying its fee disclosure obligations; once you confirm past this screen, you have legally accepted all fees shown, making it essential to a…
Network fees are variable and can be unexpectedly high during periods of blockchain congestion, and users should check the estimated network fee before initiating any crypto transfer.
Because the fee schedule can change without a specified minimum notice period, users who rely on current fees for investment planning may find their cost structure changed without sufficient advance …
These rights are legally enforceable and allow you to find out exactly what data Coinbase holds about you, correct errors, or request deletion — important tools given the sensitivity of the financial…
GDPR gives EU residents the right to have their personal data erased, but blockchain transactions are permanent and publicly visible, creating a fundamental conflict between your legal rights and the…
Violating these conduct rules — even through common practices like web scraping, using unofficial API clients, or automated data collection — can result in immediate account termination, and Google's…
For EU and UK users, transfers of personal data to the United States require specific legal mechanisms (SCCs, adequacy decisions) following the Schrems II ruling, and Google's use of global server in…
This is the only provision that directly references user data controls and consent, but it is framed as an aspiration for AI product design rather than a binding obligation specifying what data is co…
These rights are legally required under GDPR (Arts. 15-20) and CCPA, and Google's implementation of them through self-service tools means consumers can actually exercise meaningful control over their…
Without specified safety testing standards, audit rights, or public disclosure of safety test results, this commitment cannot be independently verified by consumers, regulators, or enterprise custome…
The absence of clear, uniform retention periods means users cannot know with certainty when their data will be permanently deleted, and Google's broad 'legitimate business purposes' exception allows …
This is the closest this document comes to granting users a right of recourse against AI decisions, but it is framed as a design aspiration rather than a legally enforceable right.
Google's 'as is' disclaimer means you have no contractual right to a working, reliable service — if Gmail goes down and you lose important emails or business operations are disrupted, Google has no l…
This self-acknowledged ambiguity creates significant governance risk: Google concedes that the boundary between permitted military AI work and prohibited weapons-adjacent AI is unclear, leaving subst…
The right to delete your data is meaningful only if deletion is complete and prompt — the delay and safety-review exception create ambiguity about how long your data truly persists after you request …
Extensions dramatically expand the data sharing surface beyond Google, creating a chain of data processors whose privacy practices users are expected to independently evaluate — a significant consume…
While Google provides stronger data use protections for minors and students, data is still collected and retained, and the onus is on school administrators to configure appropriate settings — creatin…
This restriction ensures all Maps data must flow through Google's live APIs, preventing developers from building offline-capable or independent datasets from Google's mapping data.
This restricts developers from displaying Google Maps data in any interface that does not use Google's Maps framework, preventing integration of Google map data into alternative visualizations or mix…
End users of developer applications are indirectly bound by Google's Maps Platform terms through the developer's required pass-through, extending Google's legal framework into consumer-facing applica…
These rights are legally mandated under GDPR and CCPA, and knowing how to exercise them gives you meaningful control over your personal data held by one of the world's largest data controllers.
Most consumers only read the main Terms of Service, but Meta's most impactful provisions regarding data collection, behavioral advertising, and content moderation are contained in separate documents …
These obligations are enforced through Meta's automated and human content moderation systems, and violations can result in immediate account suspension without the user having a formal appeals proces…
The retention period following account deletion is not specifically defined, and the broadly worded exceptions — including 'protect ourselves in legal disputes' — could permit Meta to retain your per…
The right to know you are interacting with AI — and to understand how it makes decisions affecting you — is increasingly recognized as a fundamental requirement by regulators globally, and this commi…
The existence of a published standard and impact assessment process means Microsoft has created a benchmark against which its own AI products can be evaluated — and against which regulators or plaint…
Consumers in states with stronger consumer protection laws than Washington — such as California, New York, or Illinois — may lose access to state-law protections that would otherwise apply to their c…
Users who forget to cancel before the renewal date will be automatically charged for another full subscription period, and the agreement authorizes Microsoft to store and automatically charge your pa…
AI system failures in safety-critical applications — healthcare, transportation, public safety — can cause physical harm; this commitment, without specified testing standards or third-party safety ce…
This provision means Microsoft can materially alter your rights, obligations, and Microsoft's data practices at any time with only notification as a precondition — your continued use of any Microsoft…
As Microsoft AI systems process increasing volumes of personal data to power tools like Copilot and Azure AI, this commitment determines the baseline privacy protections consumers can expect — and si…
The existence of named governance bodies creates an accountability structure that regulators and the public can reference — and their effectiveness (or lack thereof) will determine whether Microsoft'…
Following the Schrems II ruling (CJEU 2020), the legal validity of data transfers to the US depends on supplementary measures alongside SCCs; while the EU-US Data Privacy Framework (2023) now provide…
This disclaimer means Microsoft bears no legal responsibility if a service goes down, loses your data, contains security vulnerabilities that expose your information, or fails to perform as advertise…
These stated principles may establish a standard of care against which Microsoft's actual AI product behavior can be measured by regulators and courts.
This commitment, if not operationalized in actual product design, could be characterized as a deceptive practice by regulators, and does not specify what data Microsoft collects through its AI system…
Transparency is a core requirement under GDPR Art. 13 and 14 (information to be provided to data subjects) and EU AI Act Art. 13, but this commitment does not specify what disclosures are actually ma…
Internal governance structures are increasingly required by law under the EU AI Act, but this page does not describe external audit rights, third-party verification, or how affected individuals can t…
Even at the minimum 'Required' level, Microsoft collects device identifiers, error logs, and hardware information; the 'Optional' level significantly expands the scope of data collection to include d…
Even without sharing your name, the combination of search queries, location, device identifiers, and IP address shared with advertising partners can enable re-identification and detailed behavioral …
The Privacy Dashboard gives consumers a practical tool to exercise their data rights, but the scope of what can be deleted is limited — some data essential to service delivery or required by law cann…
Many consumers are unaware that subscriptions automatically renew, resulting in unexpected charges, and the burden is on the consumer to cancel before the renewal date rather than on Microsoft to obt…
This provision gives Microsoft broad discretion to determine what constitutes 'harmful' or 'misleading' AI-generated content and to terminate accounts accordingly, with limited transparency about how…
This clause means Microsoft can materially change the conditions of your service — including data use policies, arbitration terms, and content licensing — without requiring your affirmative consent, …
These rights are legally enforceable in California, the EU, and many other states, meaning OpenAI must respond to your requests within legally mandated timeframes or face regulatory consequences.
This clause is unusual in the industry and reflects OpenAI's specific safety philosophy — it creates enforceable obligations around AI governance and model safety that go beyond typical content moder…
A change of ownership could mean your personal data ends up with a company with very different privacy practices, and you may have limited ability to prevent this.
Data about you can be collected by OpenAI even if you never directly use its services — for example, if another user pastes content mentioning you into ChatGPT.
These rights are legally enforceable in the EU, UK, and California, and OpenAI has provided specific channels to exercise them — making it important for users to know these options exist.
Your data may flow to companies you have no direct relationship with, and the policy does not enumerate specific recipients — making it difficult to assess the full scope of data sharing.
Violating these prohibitions can result in immediate account suspension or termination without notice, and OpenAI retains broad discretion to determine what constitutes a violation, meaning users ris…
This provision creates a two-tier system where EU residents receive stronger legal protections under GDPR-compliant terms, while consumers in other regions (including the US, UK, and rest of world) a…
Incorporating multiple separate policy documents by reference means the full scope of your legal obligations and OpenAI's data rights are spread across at least four distinct documents, making it pra…
This means OpenAI can significantly alter what rights you have, what data they collect, or what you're allowed to do, and simply continuing to use ChatGPT counts as your legal agreement — you must ac…
While OpenAI gives you rights to the content it generates for you, the legal status of AI-generated content remains uncertain — copyright offices in multiple jurisdictions have held that AI-generated…
Material changes to data practices could occur with only a website date-change notice, meaning many users will not be aware of significant shifts in how their data is collected or used.
The disclaimer that security cannot be guaranteed means that in the event of a data breach, OpenAI's liability may be limited, and sensitive conversation data could be exposed.
A new owner acquiring OpenAI could have different privacy practices or business models, and your data would automatically transfer without your explicit consent.
Sharing data with marketing and analytics vendors creates risk that your data will be used for targeted advertising or profiling beyond what you agreed to when you signed up for ChatGPT.
With AI-generated content increasingly implicated in election interference, this clause reflects OpenAI's attempt to limit liability and comply with emerging electoral integrity regulations globally.
For businesses and developers relying on API access, unilateral termination without detailed advance notice or procedural guarantees creates significant business continuity risk.
Your rights and obligations may differ significantly depending on which Revolut products you use, and you need to read multiple documents to understand the full terms of your relationship with Revolu…
These agreements are the legally binding contracts between Salesforce and its business customers — they define what Salesforce must deliver, what customers can do with the software, and what happens …
Even if you request deletion of your data, Stripe may retain it for extended periods to comply with financial regulations, meaning your right to erasure is limited in the financial services context.
Knowing and exercising these rights is the primary mechanism consumers have to control how Stripe uses their data, including requesting deletion or objecting to profiling for fraud prevention purpose…
The license to use data for product 'improvement and development' is broader than simply providing services and could encompass training machine learning models, benchmarking, or product analytics us…
Millions of people who have never signed up for Stripe may have their data collected and profiled simply by checking out on a merchant website, without realizing Stripe is involved.
This clause allows Stripe to materially alter the terms of your contract — including fee structures, reserve policies, and acceptable use restrictions — without your explicit consent, simply by notif…
Your financial data, identity information, and behavioral signals are shared across a wide ecosystem of third parties, many of whom you have no direct relationship with and whose own privacy practice…
Your personal data flows to Stripe's corporate affiliates and numerous third-party vendors, each of which has its own data practices and security posture, creating a broad data exposure footprint.
When Stripe acts as a processor, your privacy rights are primarily directed at the merchant, not Stripe — which can make it harder for consumers to know who is responsible for their data and where to…
EU and UK users' data is transferred internationally and protected by contractual safeguards, but those safeguards have faced legal challenges and may be subject to US government surveillance laws.
Creators and businesses who have built brand identity around a TikTok username face the risk of losing it permanently if they take an extended break from the platform — with no compensation or right …
Virtual currency purchases on TikTok involve real money but carry no guarantee of refund, transferability, or real-world value, creating financial risk — particularly for minors who may make unauthor…
Account suspension without adequate notice, appeal rights, or transparency about decision-making criteria can deprive users — including creators who earn income on the platform — of access to their c…
Battery status information, while seemingly innocuous, has been identified by privacy researchers and regulators as a device fingerprinting vector that can be used to track users across websites even…
Algorithmic recommendation systems on TikTok have been linked to amplification of harmful content to vulnerable users including minors, and the lack of transparent disclosure of ranking signals makes…
The inclusion of 'privately posted' content and the broad 'for any reason' framing gives TikTok near-unlimited discretion to remove content, which may affect creators and businesses who rely on the p…
This system determines whether advertising and analytics companies receive your behavioral data based on consent signals — but the complexity of the consent framework means users may not fully unders…
The complexity and fragmentation of this consent architecture across multiple TikTok domains (tiktok.com, music.tiktok.com, business.tiktok.com) creates risk that consent signals may not be consisten…
Frequent telemetry reporting means TikTok collects near-real-time behavioral data from users, and the transparency of this collection depends entirely on whether it is adequately disclosed in TikTok'…
TikTok's unilateral content removal and account suspension powers affect creators' livelihoods and freedom of expression, particularly because the appeals process is internal with no independent exte…
Background check data is highly sensitive and its use to make eligibility determinations triggers specific legal protections under the Fair Credit Reporting Act, including the right to receive a copy…
This behavioral monitoring data is used to generate safety scores that can affect your standing on the platform, and may be shared with insurers, meaning how you drive could directly affect your insu…
Exercising your data rights is one of the most important tools you have to understand and control what Uber knows about you, but the effectiveness of these rights depends on Uber responding correctly…
Knowing and exercising these rights is the primary way drivers can control what data Uber holds about them, correct inaccuracies that could affect their account, and request deletion of data they no …
Your bank account number, earnings records, and tax identification information are among the most sensitive financial data that exists, and sharing this with third parties creates risk of financial f…
Without access to the actual privacy policy, consumers and compliance teams cannot assess AT&T's data collection, sharing, or consumer rights provisions.
Data subject rights are a cornerstone of modern privacy law, and knowing how to exercise them — and understanding Apple's identity verification requirement — is essential for consumers who want to ma…
This quality and accuracy standard protects consumers from downloading apps that do not perform as advertised, reducing the risk of deceptive app marketing and wasted purchases.
Without the actual agreement text, consumers and compliance teams cannot assess the terms governing Chase deposit accounts.
The policy's age restriction is self-enforced through user attestation, not technical verification, meaning minors who misrepresent their age can access Coinbase and their data will be collected unti…
Without access to the actual privacy policy, consumers and compliance teams cannot assess how Credit Karma collects, uses, or shares personal and financial data.
This provision implies Google will publish research and engage with independent researchers, which is important for external AI safety scrutiny — but 'available' is undefined and does not commit to o…
This principle gives Google broad self-assessed discretion to determine what constitutes social benefit, with no defined methodology, timeline, or accountability mechanism.
While Google commits to reasonable advance notice for material changes — which is a positive consumer protection — continued use of Google services after changes take effect constitutes acceptance of…
Without actual legal provisions, consumers and compliance professionals cannot assess rights, obligations, or risks from this document.
If you cannot easily locate the terms governing your specific Klarna product, you may unknowingly agree to conditions that affect your rights and finances.
Providing terms in a user's native language is a meaningful step toward informed consent, but where only one language is available for a given country, users who do not speak that language may be agr…
This commitment has direct implications for the accessibility and equal availability of Microsoft AI products across diverse user populations, including users with disabilities who may rely on AI-pow…
Microsoft's stated support for AI regulation signals its policy positioning on emerging laws like the EU AI Act and US federal AI legislation, which may influence how those laws are written and what …
This commitment engages accessibility law obligations including the Americans with Disabilities Act and the European Accessibility Act, but the page does not specify how AI accessibility is tested or…
This mechanism indicates that OpenAI's usage policy is not entirely static — operators and users may be able to request modifications to default restrictions, which is relevant for enterprise and spe…
Tracking technologies collect behavioral and device data that builds a profile of your usage patterns, which may be shared with analytics and advertising partners.
The existence of a public reporting mechanism is relevant to OpenAI's compliance with the EU Digital Services Act's illegal content reporting requirements and demonstrates a basic trust and safety in…
Understanding IP ownership is important for organizations to ensure that proprietary business information shared on Slack remains their property and is not claimed by Slack.
Stripe uses your contact information collected during payment processing or account creation to send promotional messages, and you need to actively opt out rather than being opted out by default.