Microsoft commits to testing its AI systems to make sure they do not unfairly discriminate against people based on race, gender, disability, age, or other protected characteristics.
In practice, this means Microsoft tests its AI-powered products for discriminatory outcomes before and during deployment, which should reduce the risk that those products treat you unfairly based on race, gender, age, or disability status, particularly in consequential decisions such as hiring or credit.
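As a rough illustration of what such testing can involve, the sketch below computes group-level selection rates and an impact ratio in the style of the four-fifths rule referenced in EEOC guidance on selection procedures. The group labels, sample data, and 0.8 threshold are illustrative assumptions, not a description of Microsoft's actual test suite.

```python
# Illustrative sketch of a simple group-disparity check (four-fifths-rule style).
# The groups, data, and threshold below are hypothetical examples, not Microsoft's tests.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs; returns per-group selection rate."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Ratio of each group's selection rate to the highest group's rate."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical outcomes from an AI screening tool: 60% of group_a selected, 42% of group_b.
decisions = (
    [("group_a", True)] * 60 + [("group_a", False)] * 40
    + [("group_b", True)] * 42 + [("group_b", False)] * 58
)

rates = selection_rates(decisions)
for group, ratio in impact_ratios(rates).items():
    flag = "review for adverse impact" if ratio < 0.8 else "within 4/5 threshold"
    print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} ({flag})")
```

Real-world fairness testing goes well beyond a single ratio (for example, comparing error rates and calibration across groups), but even this minimal check shows why the commitment depends on ongoing measurement rather than a policy statement alone.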
AI bias can replicate and amplify existing discrimination at scale. Without active fairness commitments and testing, AI systems used in hiring, credit, healthcare, and other high-stakes contexts can systematically disadvantage protected groups.
REGULATORY FRAMEWORK: Fairness and non-discrimination commitments implicate EU AI Act Art. 10 (data governance, including examination and mitigation of possible biases in training data) and Art. 9 (risk management and testing obligations for high-risk AI). In the US, Title VII of the Civil Rights Act, the Equal Credit Opportunity Act (ECOA, 15 U.S.C. § 1691), and the Fair Housing Act apply to AI-driven decisions in employment, credit, and housing. CFPB guidance on algorithmic credit decisioning (2022) addresses lenders' obligations when decisions rest on complex algorithms, and EEOC guidance on AI in employment (2023) addresses employer liability for discriminatory AI selection tools.