As AI agents gain the ability to take actions with real-world consequences (deleting files, making purchases, sending emails), this provision attempts to ensure humans remain …
PayPal
· PayPal Privacy Statement
Automated decisions can affect your account access, transactions, and financial opportunities without human review, and your personal data is being used to train AI systems …
This license is broad and perpetual, meaning LinkedIn can use your professional content, name, image, and likeness to train AI models even after you delete …
Using customer financial and behavioral data to train AI systems without a clear opt-out is an emerging area of regulatory concern and may constitute secondary …
Your professional data and behavior on LinkedIn may be used to build AI systems that affect how content, jobs, and people are ranked and recommended …
AI bias in Microsoft products used for hiring, lending, healthcare, or law enforcement can cause material harm to protected groups, and this commitment signals Microsoft's …
Strava
· Strava Privacy Policy
The use of sensitive health and location data to train and run AI models introduces risks of opaque automated decision-making, potential processing beyond original purpose, …
Uber
· Uber Privacy Notice
Automated decisions can result in drivers losing access to their livelihood without transparent explanation or meaningful human review, which is both a significant economic risk …
Meta
· Meta Privacy Policy
Automated profiling can result in discriminatory ad delivery — for example, showing housing, employment, or financial ads only to certain demographic groups — and you …
Klarna
· Klarna Privacy Policy
Automated decisions can affect whether you can use Klarna's services or how much credit you are offered, and you may have the right to request …
Uber
· Uber Privacy Notice
Automated deactivation decisions can instantly end a driver's ability to earn income on the platform, and the policy does not clearly guarantee a meaningful right …
PayPal
· PayPal Privacy Statement
Automated decisions about fraud risk or credit can result in account limitations, payment blocks, or denial of services without human review, directly affecting your ability …
Uber
· Uber Privacy Notice
Background check data includes criminal history — some of the most sensitive personal information that exists — and automated or semi-automated decisions based on this …
OpenAI
· GPT-4o System Card (PDF)
This means OpenAI launched a publicly accessible AI model that its own safety team assessed as providing meaningful, if limited, assistance toward weapons of mass destruction …
Adobe
· Adobe Privacy Policy
Content you consider private — documents, photos, creative work — stored on Adobe's servers is subject to automated and human review, which may raise confidentiality …
AI bias in hiring, lending, healthcare, or criminal justice can have life-altering consequences; this provision signals Google's awareness but does not specify how bias will …
Automated credit risk profiling using behavioral and third-party data — particularly where it influences financial decisions like loan eligibility or account standing — is subject …
Stripe
· Stripe Privacy Policy
Consumers have no visibility into or control over their inclusion in Stripe's cross-merchant fraud scoring system, which can result in declined transactions or account restrictions …
OpenAI
· GPT-4o System Card (PDF)
A medium cybersecurity uplift rating means GPT-4o can meaningfully help malicious actors create cyberweapons, and the only gate on deployment is OpenAI's own internal threshold …
The EU AI Act creates legally enforceable rights for people affected by high-risk AI systems, and Microsoft's commitment to comply means EU users in particular …
Microsoft
· Microsoft Responsible AI Principles
Algorithmic discrimination is a growing enforcement priority for regulators; if Microsoft AI systems produce discriminatory outcomes in employment, credit, housing, or healthcare contexts, affected users …
AI systems that discriminate can harm people's access to jobs, credit, housing, and services — this commitment is intended to prevent such outcomes across Microsoft's …
Stripe
· Stripe Privacy Policy
A fraud risk score assigned by Stripe could result in your payment being declined at any merchant using Stripe, without your knowledge of how the …
AI features embedded in a financial app carry heightened risk because inaccurate AI-generated financial guidance could cause real monetary harm, and the disclaimer of accuracy …
If you use a Claude-powered healthcare, legal, or financial app, the operator of that app is required by this policy to tell you that AI …
This commitment is directly relevant to consumers subject to AI-driven decisions in high-stakes contexts like employment screening, credit, healthcare, or law enforcement, where automated decisions …
Human oversight requirements mean that AI should not make final consequential decisions about your life without a person being accountable, which is a critical safeguard …
Adobe
· Adobe Privacy Policy
Your creative work, documents, and other content may be used to train or improve Adobe's AI and machine learning systems unless you actively opt out.
Sensitive AI use cases, particularly in law enforcement and surveillance, carry significant civil liberties implications; Microsoft's review process is meant to prevent harmful deployments.
Uber
· Uber Privacy Notice
Telematics data is used to make decisions about your driver account, including potential deactivation. You have limited visibility into how these scores are calculated and …