Found in 25 of 170 platforms tracked (15% adoption) · 67 provisions
Your creative work, documents, and other content may be used to train or improve Adobe's AI and machine learning systems unless you actively opt out.
Content you consider private — documents, photos, creative work — stored on Adobe's servers is subject to automated and human review, which may raise confidentiality concerns for professional or sensitive material.
As AI agents gain the ability to take actions with real-world consequences (deleting files, making purchases, sending emails), this provision attempts to ensure humans remain in control — but enforcement is not specified.
If you use a Claude-powered healthcare, legal, or financial app, the operator of that app is required by this policy to tell you that AI is not a substitute for licensed professional advice — but enforcement of that requirement is unclear.
Using customer financial and behavioral data to train AI systems without a clear opt-out is an emerging area of regulatory concern and may constitute secondary processing beyond what users reasonably expect.
Automated credit risk profiling using behavioral and third-party data — particularly where it influences financial decisions like loan eligibility or account standing — is subject to FCRA adverse action notice requirements.
Automated decisions can affect whether you can use Klarna's services or how much credit you are offered, and you may have the right to request human review of decisions that negatively affect you.
Your professional data and behavior on LinkedIn may be used to build AI systems that affect how content, jobs, and people are ranked and recommended across the platform.
Automated profiling can result in discriminatory ad delivery — for example, showing housing, employment, or financial ads only to certain demographic groups — and you may have limited visibility into how ads are targeted to you.
Sensitive AI use cases, particularly in law enforcement and surveillance, carry significant civil liberties implications; Microsoft's review process is meant to prevent harmful deployments.
Human oversight requirements mean that AI should not make final consequential decisions about your life without a person being accountable, which is a critical safeguard for consumers.
The EU AI Act creates legally enforceable rights for people affected by high-risk AI systems, and Microsoft's commitment to comply means EU users in particular gain concrete legal protections.
AI systems that discriminate can harm people's access to jobs, credit, housing, and services — this commitment is intended to prevent such outcomes across Microsoft's AI products.
This means OpenAI launched a publicly accessible AI model that its own safety team assessed as providing meaningful, if limited, assistance toward weapons of mass destruction, relying on classifiers and monitoring to mitigate that risk.
Unlike previous text-based AI, GPT-4o's voice outputs are designed to sound emotionally resonant, which OpenAI's own safety team identified as a vector for user manipulation and over-reliance that was absent from text-only models.
Automated decisions can affect your account access, transactions, and financial opportunities without human review, and your personal data is being used to train AI systems that may serve PayPal's broader business interests.
The use of sensitive health and location data to train and run AI models introduces risks of opaque automated decision-making, potential processing beyond original purpose, and exposure to sub-processors.
Consumers have no visibility into or control over their inclusion in Stripe's cross-merchant fraud scoring system, which can result in declined transactions or account restrictions based on activity at other merchants.
Automated deactivation decisions can instantly end a driver's ability to earn income on the platform, and the policy does not clearly guarantee a meaningful right to human review of such decisions in all cases.
The phrase 'if any' signals that Anthropic makes no guarantee that AI-generated outputs are copyrightable, which means you may not own the content Claude generates for you in any legally enforceable sense.
Automated profiling can significantly affect your experience on the platform — including who you can connect with — and raises rights under GDPR Article 22 to not be subject to solely automated decisions that produce legal or similarly significant effects.
Embedding generative AI into a financial platform creates significant risk for consumers who may rely on AI-generated financial guidance without understanding its limitations — particularly in a context where errors carry real financial consequences.
User inputs to AI features may contain sensitive personal information, and the policy does not specify how long input data is retained, whether it is used to train AI models, or whether it is shared with third parties.
Profiling can affect what opportunities, services, or prices you're shown on the platform, and raises concerns about fairness and transparency in automated decision-making.
These principles define Google's stated standard of care for AI development, which could be used as a benchmark in regulatory investigations or litigation if Google's AI products cause harm.
As Google deploys AI in consequential contexts including health information, navigation, financial tools, and communication, safety failures in AI outputs could cause direct physical or financial harm.
This is the most concrete commitment in the document — it defines a floor below which Google says it will not go — but it is self-policed with no external verification.
Algorithmic bias in Google's AI systems — including Search AI Mode, Gemini, and automated decision tools — can cause real-world harm to individuals in employment, credit, healthcare, and information access.
This commitment describes how Google intends to handle personal data used to train and operate AI systems — but without specifying which data types, what 'notice and control' means in practice, or how users can exercise either, it is difficult to evaluate.
This provision prevents users and developers from building AI tools that access Hinge data, but it also protects user content from being scraped or processed by unauthorized third-party AI systems.
Automated systems shape who you see and who sees you on Hinge, and EEA/UK users have the right under GDPR to contest automated decisions that significantly affect them.
Automated fraud risk assessments can affect your ability to use Klarna's services or make purchases, and data shared with fraud prevention networks may persist for extended periods.
The quality and provenance of training data directly determines whether AI systems are fair, accurate, and respectful of privacy — poor data governance is a primary source of AI harm.
Transparency in AI is important because it allows people to understand why an AI made a decision that affected them, which is particularly relevant when AI is used in consequential areas like healthcare.
AI bias can lead to discriminatory outcomes in areas like hiring, lending, healthcare, and criminal justice — Microsoft's public commitment to fairness is relevant to consumers who may be affected by these systems.
Human oversight is a critical safeguard ensuring that automated AI systems do not make consequential decisions — such as those affecting health, finances, or safety — without human review and accountability.
These principles set the baseline standard for how Microsoft AI systems that affect your life — from job applications screened by AI to healthcare tools — are supposed to behave, though they are voluntary commitments rather than legal obligations.
Internal governance structures determine whether AI commitments are enforced in practice or remain aspirational, directly affecting the reliability of protections consumers and enterprises depend on.
AI systems often process large amounts of personal data; committing to privacy by design means Microsoft says it builds in data protection from the start rather than as an afterthought.
Human oversight is a critical safeguard against AI errors and harms, especially in high-stakes areas like healthcare, legal proceedings, and financial decisions.
Biased AI can affect hiring decisions, credit scoring, healthcare access, and other high-stakes outcomes; Microsoft's commitment to fairness is a signal that these risks are being actively managed.
Consumers in sensitive sectors — such as patients using AI-powered health tools or customers using AI-driven financial products — should know that Microsoft acknowledges heightened responsibilities in these contexts.
These principles are the foundation of how Microsoft designs AI tools that consumers use, including Copilot, Bing, and Azure AI — they signal what safeguards are intended to be built in.
Transparency means you should know when you are interacting with AI and be able to understand why it made a decision that affects you, which is essential for accountability.
Transparency allows users, regulators, and auditors to scrutinize AI behavior and hold Microsoft accountable when AI produces harmful or unexplained outcomes.
This standard is one of the most concrete governance tools described in the document; it sets product-level requirements rather than just aspirational principles.
This commitment is directly relevant to consumers because AI products process significant amounts of personal data; the stated principle of privacy-by-design is a meaningful standard if implemented consistently.
This means the capabilities and restrictions you experience when using an OpenAI-powered product may differ significantly from platform to platform, depending on how that operator has configured the underlying model.
GPT-4o's ability to assist with code generation, vulnerability analysis, and technical problem-solving means it has inherent dual-use cybersecurity potential that OpenAI acknowledges but has decided to accept.
This is a direct acknowledgment that GPT-4o's safety behaviors are not fully robust to manipulation, which has implications for any deployment context where the model may encounter adversarial users.
The entire safety assurance framework governing GPT-4o's release is self-administered, meaning there is no independent verification that OpenAI's risk ratings or mitigations are accurate or sufficient.
This means that when you interact with a GPT-4o-powered application, the safety settings you experience may be significantly different from ChatGPT's defaults — the business operating that app may have adjusted them.
As Salesforce integrates AI (including autonomous agents) into its products, this framework governs guardrails around AI behavior — which matters to any business relying on Salesforce AI for decision-making.
These inferences form the basis for personalization and advertising targeting, meaning Spotify may act on assumptions about you that could be inaccurate, and you have limited visibility into what inferences have been made.
Profiling is active by default, meaning your data is continuously analyzed to shape what content and offers you see — you must actively turn this off to stop it.
Automated decisions can affect your ability to use the service and who you are matched with, without meaningful human review or explanation of the criteria used.
Users may be unknowingly enrolled in experiments that affect their experience on the platform, including changes to content visibility, feed ranking, or feature availability.
AI processing of personal data raises additional privacy risks including inferences about sensitive characteristics and potential for automated decision-making affecting users.
This commitment to scientific openness and peer collaboration is relevant to evaluating whether Google's AI safety and bias claims are independently verifiable or purely self-assessed.
This principle establishes that Google intends its AI systems to have human oversight mechanisms — which is directly relevant to anyone who wants to challenge or correct an AI-generated decision that affects them.
This standard is referenced as the operational implementation of Microsoft's ethical principles, meaning it governs the actual development process of AI products consumers use.
By releasing these tools publicly, Microsoft creates a verifiable and auditable basis for its fairness and transparency claims that external researchers and regulators can independently evaluate.
Having a dedicated governance body signals that Microsoft takes AI accountability seriously, and its existence is relevant for enterprise buyers assessing whether a vendor has adequate AI risk management.
The existence of a senior-level ethics advisory body suggests that AI ethics questions are escalated to decision-makers, which is a positive signal for institutional buyers assessing AI governance maturity.
This governance structure is relevant because it indicates there is an internal body responsible for enforcing AI ethics standards — but it is an internal function, not an independent or external oversight mechanism.
External red teaming is considered a best practice in AI safety, and its inclusion demonstrates a meaningful (if not independently verified) pre-deployment safety process — though the selection of red teamers rests with the company.
While fraud prevention is beneficial, automated monitoring of financial transactions involves significant surveillance of personal activity and may result in account actions based on automated decisions.