Microsoft · Microsoft Responsible AI Principles
This commitment implicates accessibility-law obligations, including the Americans with Disabilities Act and the European Accessibility Act, but the page does not specify how AI …
A dedicated governance body signals that Microsoft takes AI accountability seriously, and its existence is relevant for enterprise buyers assessing whether a vendor has …
Microsoft · Microsoft Responsible AI Principles
This governance structure is relevant to enterprise and government customers who need to assess whether Microsoft has adequate internal controls for AI risk management, but …
Microsoft · Microsoft Responsible AI Principles
This standard is referenced as the operational implementation of Microsoft's ethical principles, meaning it governs the actual development process of AI products consumers use.
By releasing these tools publicly, Microsoft creates a verifiable and auditable basis for its fairness and transparency claims that external researchers and regulators can independently …
This commitment to scientific openness and peer collaboration is relevant to evaluating whether Google's AI safety and bias claims are independently verifiable or purely self-assessed.
Microsoft's stated support for AI regulation signals its policy positioning on emerging laws like the EU AI Act and US federal AI legislation, which may …
This provision implies Google will publish research and engage with independent researchers, which is important for external AI safety scrutiny — but 'available' is undefined …
As AI agents gain the ability to take actions with real-world consequences (deleting files, making purchases, sending emails), this provision attempts to ensure humans remain …
PayPal · PayPal Privacy Statement
Automated decisions can affect your account access, transactions, and financial opportunities without human review, and your personal data is being used to train AI systems …
This license is broad and perpetual, meaning LinkedIn can use your professional content, name, image, and likeness to train AI models even after you delete …
Using customer financial and behavioral data to train AI systems without a clear opt-out is an emerging area of regulatory concern and may constitute secondary …
Your professional data and behavior on LinkedIn may be used to build AI systems that affect how content, jobs, and people are ranked and recommended …
AI bias in Microsoft products used for hiring, lending, healthcare, or law enforcement can cause material harm to protected groups, and this commitment signals Microsoft's …
Strava · Strava Privacy Policy
The use of sensitive health and location data to train and run AI models introduces risks of opaque automated decision-making, potential processing beyond original purpose, …
Uber · Uber Privacy Notice
Automated decisions can result in drivers losing access to their livelihood without transparent explanation or meaningful human review, which is both a significant economic risk …
Meta · Meta Privacy Policy
Automated profiling can result in discriminatory ad delivery — for example, showing housing, employment, or financial ads only to certain demographic groups — and you …
Uber · Uber Privacy Notice
Automated deactivation decisions can instantly end a driver's ability to earn income on the platform, and the policy does not clearly guarantee a meaningful right …
Klarna · Klarna Privacy Policy
Automated decisions can affect whether you can use Klarna's services or how much credit you are offered, and you may have the right to request …
PayPal · PayPal Privacy Statement
Automated decisions about fraud risk or credit can result in account limitations, payment blocks, or denial of services without human review, directly affecting your ability …
Uber · Uber Privacy Notice
Background check data includes criminal history — some of the most sensitive personal information that exists — and automated or semi-automated decisions based on this …
OpenAI · GPT-4o System Card (PDF)
This means OpenAI launched a publicly accessible AI model that its own safety team assessed as providing meaningful, if limited, assistance toward weapons of mass destruction …
Adobe · Adobe Privacy Policy
Content you consider private — documents, photos, creative work — stored on Adobe's servers is subject to automated and human review, which may raise confidentiality …
AI bias in hiring, lending, healthcare, or criminal justice can have life-altering consequences; this provision signals Google's awareness but does not specify how bias will …
Automated credit risk profiling using behavioral and third-party data — particularly where it influences financial decisions like loan eligibility or account standing — is subject …
Stripe · Stripe Privacy Policy
Consumers have no visibility into or control over their inclusion in Stripe's cross-merchant fraud scoring system, which can result in declined transactions or account restrictions …
OpenAI · GPT-4o System Card (PDF)
A medium cybersecurity uplift rating means GPT-4o can meaningfully help malicious actors create cyberweapons, and the only gate on deployment is OpenAI's own internal threshold …
The EU AI Act creates legally enforceable rights for people affected by high-risk AI systems, and Microsoft's commitment to comply means EU users in particular …
Microsoft · Microsoft Responsible AI Principles
Algorithmic discrimination is a growing enforcement priority for regulators; if Microsoft AI systems produce discriminatory outcomes in employment, credit, housing, or healthcare contexts, affected users …
AI systems that discriminate can harm people's access to jobs, credit, housing, and services — this commitment is intended to prevent such outcomes across Microsoft's …