Microsoft · Responsible AI

AI Fairness Across Demographic Groups

High severity

What it is

Fairness: AI systems should treat all people fairly. We work proactively to detect and mitigate unfair bias in our AI systems so that all individuals are treated equitably regardless of their personal characteristics.

Why it matters

AI bias in Microsoft products used for hiring, lending, healthcare, or law enforcement can cause material harm to protected groups, and this commitment signals Microsoft's recognition of that risk — though it does not provide consumers with a direct remedy.

Consumer impact

Microsoft's Responsible AI framework sets out the ethical principles — fairness, reliability, privacy, security, inclusiveness, transparency, and accountability — that govern how AI is built and deployed across Microsoft's consumer products. These commitments signal meaningful intent, but they are voluntary and create no legally enforceable rights for individual users; consumers harmed by AI decisions have limited direct recourse under this document alone. You can submit feedback or concerns about Microsoft AI systems through the dedicated responsible AI resources linked at microsoft.com/en-us/ai/responsible-ai.

Applicable agencies

  • FTC
    The FTC has issued guidance on AI bias as an unfair or deceptive practice under Section 5 and can investigate Microsoft AI tools that produce discriminatory outcomes inconsistent with its fairness commitments.
    File a complaint →
  • CFPB
    The CFPB has jurisdiction over AI bias in credit decisions under ECOA and the Fair Credit Reporting Act, applicable to Microsoft AI tools used by financial institutions.
    File a complaint →

Provision details

Document information
Document: Responsible AI
Entity: Microsoft
Document last updated: March 5, 2026

Tracking information
First tracked: March 15, 2026
Last verified: April 4, 2026
Record ID: CA-P-002072
Document ID: CA-D-00003
Evidence provenance
Source URL: Wayback Machine
SHA-256: de99fca7fd2ebd374c7f5dd22d7ff57569e2321c88c91f75c4f9e17147793b07
Verified: ✓ Snapshot stored · ✓ Change verified
How to Cite
ConductAtlas Policy Archive
Entity: Microsoft | Document: Responsible AI | Record: CA-P-002072
Captured: 2026-03-15 11:09:49 UTC | SHA-256: de99fca7fd2ebd37…
URL: https://conductatlas.com/platform/microsoft/responsible-ai/ai-fairness-across-demographic-groups/
Accessed: April 4, 2026
Classification
Severity: High