Microsoft · Microsoft Responsible AI Principles

AI Fairness Commitment

Medium severity

What it is

Microsoft commits to developing AI systems that treat all people fairly and avoid affecting similarly situated groups of people in different ways, particularly regarding consequential uses of AI.

Consumer impact (what this means for users)

If a Microsoft AI product makes or influences a decision that negatively affects you (for example, in hiring, lending, or healthcare), this fairness commitment does not give you a right to appeal, an explanation, or a remedy under this document.

What you can do

⚠️ These actions may provide transparency or partial mitigation but may not fully address the underlying issue. Effectiveness varies by jurisdiction and individual circumstances.
  • Delete Your Data
    Within 30 days
    Visit Microsoft's Privacy Request portal, select your region, and submit a request to access, correct, or delete personal data that may have been used in automated AI-driven decisions affecting you.


Why it matters (compliance & risk perspective)

For consumers who may be affected by AI-driven decisions in areas like credit, employment screening, healthcare, or content moderation, the absence of a binding fairness obligation means there is no mechanism to challenge an unfair AI outcome through this document.

Institutional analysis (Compliance & legal intelligence)

(1) REGULATORY FRAMEWORK: GDPR Art. 22 restricts solely automated decision-making that produces legal or similarly significant effects on data subjects, requiring a lawful basis, human review upon request, and the right to contest such decisions. CCPA/CPRA §1798.185 directs regulations governing disclosure of automated decision-making logic and opt-out rights (CPRA operative January 1, 2023). The Equal Credit Opportunity Act (ECOA) and the Fair Housing Act prohibit algorithmic discrimination in lending and housing. The NYC Automated Employment Decision Tools Law (Local Law 144) requires bias audits for AI hiring tools. Enforcement: CFPB (credit), EEOC (employment), HUD (housing), state AGs.


Applicable agencies

  • FTC
    The FTC has authority over algorithmic fairness and bias in consumer-facing AI systems under Section 5 of the FTC Act, and has issued specific guidance on AI fairness obligations.
    File a complaint →

Provision details

Document information
Document
Microsoft Responsible AI Principles
Entity
Microsoft
Document last updated
April 29, 2026
Tracking information
First tracked
April 27, 2026
Last verified
April 27, 2026
Record ID
CA-P-003197
Document ID
CA-D-00019
Evidence Provenance
Source URL
Wayback Machine
SHA-256
77bc43a7f84410902fdbac1b71574e6a146d5315f383cd6ee7ecdd0ee54cd259
Verified
✓ Snapshot stored   ✓ Change verified
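The recorded SHA-256 lets anyone independently check that a saved copy of the document matches the archived snapshot. A minimal sketch in Python (the filename is hypothetical; a match requires a byte-for-byte identical copy of the stored snapshot, not a fresh download of the live page):

```python
import hashlib

# Digest recorded in the provenance block above (record CA-P-003197)
RECORDED_SHA256 = "77bc43a7f84410902fdbac1b71574e6a146d5315f383cd6ee7ecdd0ee54cd259"

def verify_snapshot(path: str, expected_sha256: str) -> bool:
    """Hash a locally saved snapshot file and compare it to the recorded digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large snapshots need not fit in memory at once.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256.lower()

# Hypothetical filename -- substitute the path of your saved copy:
# verify_snapshot("microsoft-responsible-ai-principles.html", RECORDED_SHA256)
```

If the function returns False against a freshly fetched page, the live document has changed since the snapshot was captured.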
How to Cite
ConductAtlas Policy Archive
Entity: Microsoft | Document: Microsoft Responsible AI Principles | Record: CA-P-003197
Captured: 2026-04-27 09:59:26 UTC | SHA-256: 77bc43a7f8441090…
URL: https://conductatlas.com/platform/microsoft/microsoft-responsible-ai-principles/ai-fairness-commitment/
Accessed: May 2, 2026
Classification
Severity
Medium
Categories
