Microsoft says its AI systems should treat all people fairly and not reinforce or create biases — this applies to how AI makes decisions that affect users.
This commitment means Microsoft is publicly accountable for building AI that does not discriminate — but if you are harmed by a biased AI decision in a Microsoft product, this document alone does not give you a legal remedy.
AI bias in systems like hiring tools, credit scoring, or content moderation can cause real harm to individuals, and Microsoft's commitment here sets expectations — though without legal enforceability.
REGULATORY FRAMEWORK: AI fairness directly implicates Title VII of the Civil Rights Act and the Equal Credit Opportunity Act (ECOA) where AI is used in employment or credit decisions; GDPR Art. 22 (automated individual decision-making, including profiling); EU AI Act Art. 10 (data governance and bias mitigation for high-risk AI); and FTC Act Section 5 (algorithmic bias as an unfair practice). The EEOC and CFPB have enforcement authority in US employment and credit contexts respectively; the EU AI Office enforces under the AI Act.
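To make the Title VII exposure concrete: US regulators commonly screen hiring outcomes with the EEOC's "four-fifths rule," under which a protected group's selection rate below 80% of the highest group's rate is treated as evidence of adverse impact. The sketch below computes that ratio for hypothetical selection data — the group names and counts are illustrative assumptions, not real figures from any Microsoft system.

```python
# Illustrative sketch of the EEOC four-fifths (adverse impact) rule.
# All group names and counts are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants that a model or process selected."""
    return selected / applicants

def adverse_impact_ratio(rates: dict) -> float:
    """Lowest group selection rate divided by the highest.

    A ratio below 0.8 is commonly treated as evidence of
    adverse impact under the EEOC four-fifths rule.
    """
    return min(rates.values()) / max(rates.values())

rates = {
    "group_a": selection_rate(48, 100),  # 48% selected
    "group_b": selection_rate(30, 100),  # 30% selected
}
ratio = adverse_impact_ratio(rates)
print(f"ratio = {ratio:.2f}, flagged = {ratio < 0.8}")
```

A ratio this far below 0.8 is the kind of statistical disparity that would draw EEOC scrutiny if the selections came from an automated hiring tool; the four-fifths rule is a screening heuristic, not a legal safe harbor, so passing it does not by itself establish fairness.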