Microsoft · Responsible AI Report 2025

Fairness and Non-Discrimination in AI

High severity

What it is

Microsoft commits to testing its AI systems to ensure they do not unfairly discriminate against people on the basis of race, gender, disability, age, or other protected characteristics.

Consumer impact (what this means for users)

This provision commits Microsoft to testing AI for discriminatory outcomes before and during deployment. That testing should reduce the risk that AI-powered Microsoft products treat you unfairly based on your race, gender, age, or disability status, particularly in consequential decisions.


Why it matters (compliance & risk perspective)

AI bias can replicate and amplify existing discrimination at scale. Without active fairness commitments and testing, AI systems used in hiring, credit, healthcare, and other high-stakes contexts can systematically disadvantage protected groups.
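The fairness testing this provision calls for is often operationalized with simple outcome metrics. One common screen is the disparate impact ratio underlying the EEOC's "four-fifths rule" of thumb: the selection rate for the least-favored group divided by the rate for the most-favored group. A minimal sketch, using hypothetical outcome data (the group labels and counts below are invented for illustration):

```python
from collections import Counter

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest.

    outcomes: iterable of (group, selected) pairs, selected a bool.
    Values below 0.8 flag potential adverse impact under the
    EEOC four-fifths rule of thumb.
    """
    totals = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, chosen in outcomes if chosen)
    rates = {group: selected[group] / totals[group] for group in totals}
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring-screen results: group A selected 40/100,
# group B selected 25/100.
sample = ([("A", True)] * 40 + [("A", False)] * 60
          + [("B", True)] * 25 + [("B", False)] * 75)
ratio = disparate_impact_ratio(sample)  # 0.25 / 0.40 ≈ 0.62, below 0.8
```

This is only one of several fairness metrics (others compare error rates or calibration across groups), and a low ratio is a signal for further review, not by itself a legal finding.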

Original clause language
Microsoft commits to designing and evaluating AI systems to identify and mitigate unfair bias, ensure equitable outcomes across demographic groups, and avoid AI-generated discrimination on the basis of protected characteristics including race, gender, disability, age, and other legally protected attributes.

Institutional analysis (Compliance & legal intelligence)

Regulatory framework: Fairness and non-discrimination commitments engage EU AI Act Art. 10(3) (training data requirements to address bias) and Art. 9(7) (fairness testing for high-risk AI). In the US, Title VII of the Civil Rights Act, the Equal Credit Opportunity Act (ECOA, 15 U.S.C. § 1691), and the Fair Housing Act apply to AI-driven decisions in employment, credit, and housing. CFPB guidance on AI credit decisioning (2022) explicitly addresses algorithmic bias. EEOC guidance on AI in employment (2023) establishes employer liability for discriminatory AI tools.


Applicable agencies

  • CFPB
    CFPB has enforcement authority over AI-driven credit and financial decisioning under ECOA and the Fair Credit Reporting Act where algorithmic bias produces discriminatory outcomes.
  • FTC
    FTC has authority to challenge discriminatory AI practices as unfair acts under Section 5 of the FTC Act, particularly in consumer-facing AI applications.

Provision details

Document information
Document
Responsible AI Report 2025
Entity
Microsoft
Document last updated
March 5, 2026
Tracking information
First tracked
March 5, 2026
Last verified
April 27, 2026
Record ID
CA-P-003122
Document ID
CA-D-00004
Evidence Provenance
Source URL
Wayback Machine
SHA-256
99c61ee37f0300e932720498b6db37eb5eaf309ded7c40585a2fd7f70c4ce999
Verified
✓ Snapshot stored   ✓ Change verified
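The provenance fields above pair a stored snapshot with its SHA-256 digest, so anyone holding a copy of the snapshot can recheck that it matches the version cited here. A minimal verification sketch (the filename is hypothetical; the digest is the one published above):

```python
import hashlib

# SHA-256 published in the Evidence Provenance section above.
EXPECTED = "99c61ee37f0300e932720498b6db37eb5eaf309ded7c40585a2fd7f70c4ce999"

def verify_snapshot(path, expected=EXPECTED):
    """Return True if the file's SHA-256 digest matches the published hash."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large snapshots don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected

# verify_snapshot("responsible-ai-report-2025.html")  # hypothetical local copy
```

A matching digest shows the local copy is byte-identical to the archived snapshot; any edit to the file, however small, produces a different hash.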
How to Cite
ConductAtlas Policy Archive
Entity: Microsoft | Document: Responsible AI Report 2025 | Record: CA-P-003122
Captured: 2026-03-05 09:35:48 UTC | SHA-256: 99c61ee37f0300e9…
URL: https://conductatlas.com/platform/microsoft/responsible-ai-report-2025/fairness-and-non-discrimination-in-ai/
Accessed: May 2, 2026
Classification
Severity
High
Categories
