Microsoft states that its AI systems should treat all people fairly and avoid affecting similarly situated groups of people in different ways.
This analysis describes what Microsoft's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This principle describes Microsoft's stated commitment to non-discriminatory AI outcomes, which is relevant to consumers who may be subject to AI-assisted decisions in products such as hiring tools, credit assessments, or content moderation systems.
Interpretive note: The page does not provide verbatim contractual language; the principle is described in editorial web content rather than a formal policy document with defined obligations.
This provision is a policy statement rather than a contractual right; it does not grant consumers a legally enforceable claim if they believe a Microsoft AI product produced an unfair outcome. Consumers seeking recourse for discriminatory AI decisions would need to look to applicable law and product-specific terms.
(1) Regulatory landscape: Fairness and non-discrimination obligations in automated systems are addressed under GDPR Article 22, the EU AI Act's requirements for high-risk AI systems, and US FTC guidance on algorithmic fairness. The EEOC and the CFPB have also issued guidance on AI fairness in employment and credit contexts, respectively. This policy statement does not itself satisfy any of these regulatory obligations, which require operational controls rather than public commitments.

(2) Governance exposure: Low. As a public policy statement, this provision does not create direct compliance exposure on its own. However, if Microsoft AI products are deployed in regulated contexts such as hiring, lending, or healthcare, the gap between this stated principle and demonstrated product-level fairness controls could be relevant in regulatory examinations or litigation.

(3) Jurisdiction flags: EU/EEA organizations deploying Microsoft AI in high-risk categories under the EU AI Act face heightened exposure if they rely on vendor policy statements rather than documented conformity assessments. US organizations in financial services and employment contexts should assess alignment with CFPB and EEOC AI guidance.

(4) Contract and vendor implications: Procurement teams should not treat this public policy statement as a contractual fairness warranty. Enterprise agreements and data processing addenda should be reviewed for any product-level fairness commitments.

(5) Compliance considerations: Organizations using Microsoft AI in regulated decision-making contexts should request product-specific documentation on bias testing, fairness metrics, and audit processes rather than relying on this page.
Is ConductAtlas affiliated with Microsoft? No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Microsoft.