Microsoft · Responsible AI

Fairness and Non-Discrimination Commitment

Medium severity

What it is

Microsoft commits to designing AI systems that treat all people fairly and avoid discriminatory outcomes, particularly across groups defined by race, gender, age, disability, or other protected characteristics.

Why it matters

AI systems that are not fair can cause real harm — from biased hiring tools to discriminatory lending decisions — and this commitment signals Microsoft's intent to mitigate these risks.

Institutional analysis (Compliance & legal intelligence)

Fairness commitments in AI are increasingly scrutinized under anti-discrimination law (e.g., Title VII, the ECOA, and the FCRA) and under the EU AI Act's requirements for high-risk AI systems. Compliance teams should verify that product-level bias-testing documentation exists to back such commitments.
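As an illustration of what product-level bias testing can involve (the source does not describe Microsoft's actual methodology, and the groups and data below are hypothetical), one common check is the demographic parity difference: the gap in favorable-outcome rates between the most- and least-favored groups.

```python
# Illustrative sketch only: computes the demographic parity difference,
# a common fairness metric, over hypothetical decision records.
from collections import defaultdict

def demographic_parity_difference(outcomes):
    """outcomes: iterable of (group, decision) pairs, decision 1 = favorable.

    Returns the gap between the highest and lowest per-group
    favorable-outcome rates (0.0 means perfect demographic parity).
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        positives[group] += decision
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval decisions tagged by applicant group.
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),   # group A: 75% approved
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]   # group B: 25% approved
gap = demographic_parity_difference(decisions)
print(f"Demographic parity difference: {gap:.2f}")  # prints 0.50
```

A large gap does not by itself prove unlawful discrimination, but documented thresholds and remediation steps for metrics like this are what due-diligence reviews typically look for.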


Consumer impact

This document describes Microsoft's voluntary ethical commitments for how it develops and deploys AI, including commitments to fairness, privacy, and transparency in its AI systems. For everyday consumers, this means Microsoft publicly asserts it designs AI with safety and inclusiveness in mind, though the document does not create enforceable legal rights for individual users. The practical impact on your data, finances, or safety depends on the specific Microsoft products you use and the separate terms and privacy policies governing them.

Applicable agencies

  • Federal Trade Commission (FTC)
    Oversees unfair or deceptive business practices and can investigate companies that mislead consumers about data collection, sharing, or use.
    Who can file: Anyone affected by the company's practices (US or international)
    What you need: Your account details, a timeline of relevant events, and a description of the specific issue
    What to expect: Complaints inform FTC enforcement priorities and investigations but do not result in individual resolution or compensation
    File a complaint →

Provision details

Document information
Document: Responsible AI
Entity: Microsoft
Document last updated: March 5, 2026

Tracking information
First tracked: March 5, 2026
Last verified: March 9, 2026
Record ID: CA-P-00003002
Document ID: CA-D-00003

Evidence Provenance
Source URL: Wayback Machine
SHA-256: aa3fee995909e642a2c39c8fed5902bd2185b49674da8449bd0dbad397a98b1c
Verified: ✓ Snapshot stored · ✓ Change verified
How to Cite
ConductAtlas Policy Archive
Entity: Microsoft | Document: Responsible AI | Record: CA-P-00003002
Captured: 2026-03-05 09:35:37 UTC | SHA-256: aa3fee995909e642…
URL: https://conductatlas.com/platform/microsoft/responsible-ai/fairness-and-non-discrimination-commitment/
Accessed: April 4, 2026
Classification
Severity: Medium
Categories: