Microsoft commits to developing AI systems that treat all people fairly and avoid affecting similarly situated groups of people in different ways, particularly regarding consequential uses of AI.
If a Microsoft AI product makes or influences a decision that negatively affects you — such as in hiring, lending, or healthcare — this fairness commitment does not give you a right to appeal, explanation, or remedy under this document.
For consumers who may be affected by AI-driven decisions in areas like credit, employment screening, healthcare, or content moderation, the absence of a binding fairness obligation means there is no mechanism to challenge an unfair AI outcome through this document.
Regulatory framework: GDPR Art. 22 restricts solely automated decision-making that produces legal or similarly significant effects on data subjects, requiring a lawful basis, human review on request, and the right to contest such decisions. The CCPA, as amended by the CPRA (effective 2023), directs regulations under §1798.185 covering access and opt-out rights for automated decision-making, including meaningful information about the logic involved. The Equal Credit Opportunity Act (ECOA) and the Fair Housing Act prohibit algorithmic discrimination in lending and housing. New York City's Automated Employment Decision Tools Law (Local Law 144) requires bias audits of AI hiring tools. Enforcement falls to the CFPB (credit), EEOC (employment), HUD (housing), and state attorneys general.