Microsoft · Responsible AI

AI Fairness and Non-Discrimination Commitment

High severity

What it is

Microsoft commits that its AI systems will not discriminate based on characteristics like race, gender, age, or disability, and has built tools to help developers implement this.

Consumer impact (what this means for users)

This provision directly affects consumers who interact with Microsoft AI systems in high-stakes contexts such as employment screening, credit assessment, or healthcare diagnosis. In those settings, algorithmic bias based on race, gender, age, or disability could produce discriminatory outcomes, and this framework offers no legal recourse to affected consumers.

How other platforms handle this

Pinterest · Medium severity

To the extent permitted by applicable law, the Service and all content on Pinterest is provided on an "as is" basis without warranty of any kind, whether express or implied. Pinterest specifically disclaims any and all warranties and conditions of merchantability, fitness for a particular purpose, a...

OpenAI · Medium severity

We implement technical, administrative, and organizational measures designed to protect your Personal Data against unauthorized access, loss, destruction, or alteration. However, no internet transmission or electronic storage is completely secure, and we cannot guarantee absolute security.

Google · Medium severity

Avoid creating or reinforcing unfair bias. AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularl...


Why it matters (compliance & risk perspective)

AI bias in consequential decisions, such as hiring, lending, or healthcare, can cause real harm. This commitment therefore matters, but it remains a voluntary pledge with no consumer complaint mechanism and no independent enforcement.

View original clause language
AI systems should treat all people fairly. We've developed tools and guidance, such as Fairlearn, to help developers understand and address unfairness in AI systems. AI systems should not make decisions that discriminate against people or treat them unfairly on the basis of characteristics such as race, gender, age, or disability.
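The clause names Fairlearn, Microsoft's open-source fairness toolkit, as the tooling behind this commitment. As an illustration only, here is a minimal plain-Python sketch of the kind of group-disparity metric (demographic parity difference) that Fairlearn's assessment APIs report; the function names and toy data below are hypothetical, not Fairlearn's actual implementation.

```python
def selection_rates(y_pred, groups):
    """Fraction of positive (e.g. 'hire' or 'approve') predictions per group."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    return rates

def demographic_parity_difference(y_pred, groups):
    """Largest gap in selection rate between any two groups (0 = parity)."""
    r = selection_rates(y_pred, groups).values()
    return max(r) - min(r)

# Toy example: group A is selected at 2/4 = 0.50, group B at 3/4 = 0.75.
y_pred = [1, 0, 1, 0, 1, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(y_pred, groups))  # 0.25
```

A gap like the 0.25 above is the sort of signal a developer would investigate before deploying a model in hiring, lending, or similar contexts; Fairlearn additionally provides mitigation algorithms that this sketch does not cover.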

Institutional analysis (Compliance & legal intelligence)

(1) REGULATORY FRAMEWORK: AI fairness and non-discrimination obligations are legally mandated under the EU AI Act Art. 10 (data governance for high-risk AI systems) and Art. 15 (accuracy, robustness, and cybersecurity). In the US, the Fair Housing Act, the Equal Credit Opportunity Act (ECOA), and Title VII of the Civil Rights Act apply to AI-driven decisions in housing, lending, and employment, respectively. The CFPB has issued guidance on algorithmic decision-making in credit (2022 CFPB Circular on ECOA), and the EEOC issued guidance on AI in employment decisions (2023).


Applicable agencies

  • FTC
FTC Act Section 5 prohibits unfair or deceptive acts or practices, which the FTC has applied to discriminatory AI practices affecting consumers; the agency has issued guidance specifically on AI bias and algorithmic decision-making.
    File a complaint →
  • CFPB
    CFPB has jurisdiction over algorithmic discrimination in credit, lending, and financial services decisions made using AI systems, under ECOA and CFPB supervisory authority.
    File a complaint →

Provision details

Document information
Document
Responsible AI
Entity
Microsoft
Document last updated
March 5, 2026
Tracking information
First tracked
March 15, 2026
Last verified
April 9, 2026
Record ID
CA-P-002514
Document ID
CA-D-00003
Evidence Provenance
Source URL
Wayback Machine
SHA-256
de99fca7fd2ebd374c7f5dd22d7ff57569e2321c88c91f75c4f9e17147793b07
Verified
✓ Snapshot stored   ✓ Change verified
How to Cite
ConductAtlas Policy Archive
Entity: Microsoft | Document: Responsible AI | Record: CA-P-002514
Captured: 2026-03-15 11:09:49 UTC | SHA-256: de99fca7fd2ebd37…
URL: https://conductatlas.com/platform/microsoft/responsible-ai/ai-fairness-and-non-discrimination-commitment/
Accessed: April 29, 2026
Classification
Severity
High
Categories
