Microsoft · Microsoft Responsible AI Principles

Fairness Principle

Medium severity
Share š• Share in Share

What it is

AI systems should treat all people fairly and avoid affecting similarly situated groups of people in different ways. For example, when AI systems are used in consequential scenarios like medical triage, loan applications, or hiring, they should make recommendations that are consistent for all people with similar symptoms, financial situations, or professional qualifications.

Why it matters

Discriminatory AI outcomes in lending, hiring, or healthcare can cause serious real-world harm. This principle sets an expectation of fair treatment but provides no enforcement mechanism for affected individuals.

Consumer impact

This document describes Microsoft's internal ethical framework for AI and does not directly alter consumer data rights, impose fees, or restrict legal recourse — it is a voluntary policy statement. Consumers using Microsoft AI products such as Copilot or Azure OpenAI Service are subject to separate, binding Terms of Service and Privacy Policies that govern data collection, use, and sharing. You can review Microsoft's binding Privacy Statement at https://privacy.microsoft.com to understand what data Microsoft actually collects and how it is used.

Applicable agencies

  • CFPB
    The CFPB has enforcement authority over algorithmic discrimination in credit and lending decisions, which this fairness principle directly addresses.
  • FTC
    The FTC has authority to pursue unfair or deceptive practices including discriminatory AI system outcomes and misrepresentations about fairness in AI.

Provision details

Document information
Document: Microsoft Responsible AI Principles
Entity: Microsoft
Document last updated: March 24, 2026

Tracking information
First tracked: March 15, 2026
Last verified: April 9, 2026
Record ID: CA-P-002529
Document ID: CA-D-00019

Evidence provenance
Source URL: Wayback Machine
SHA-256: 23a6ca38df3168c404a699491ed0c3613a9ae7c2bd3913347338d808de2ae94a
Verified: ✓ Snapshot stored · ✓ Change verified
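The recorded SHA-256 digest lets anyone independently confirm that an archived snapshot has not been altered. As a minimal sketch (the function and file names here are illustrative, not part of the archive's tooling), a stored snapshot file can be hashed locally and compared against the published digest:

```python
import hashlib


def sha256_of_file(path: str, chunk_size: int = 8192) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_snapshot(path: str, expected_hex: str) -> bool:
    """Compare the file's digest to a recorded value, ignoring case."""
    return sha256_of_file(path) == expected_hex.lower()
```

A match confirms the bytes on disk are identical to what was originally captured; any single-byte change produces a completely different digest.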
How to Cite
ConductAtlas Policy Archive
Entity: Microsoft | Document: Microsoft Responsible AI Principles | Record: CA-P-002529
Captured: 2026-03-15 11:24:32 UTC | SHA-256: 23a6ca38df3168c4…
URL: https://conductatlas.com/platform/microsoft/microsoft-responsible-ai-principles/fairness-principle/
Accessed: April 13, 2026
Classification
Severity: Medium
Categories:
