Microsoft commits to making its AI systems explainable so that people can understand how AI decisions that affect them are made.
This commitment means Microsoft aims to make its AI decision-making understandable, but this document alone gives you no legal right to demand an explanation for an AI decision affecting you; such a right exists only under GDPR Art. 22 (for EU users) or under specific product terms.
The right to understand AI decision-making is a key consumer protection, particularly for consequential decisions about credit, employment, or healthcare. This commitment aligns with, but does not substitute for, legal rights to explanation under GDPR and emerging US laws.
REGULATORY FRAMEWORK: AI transparency obligations are legally mandated under GDPR Art. 22 (right to explanation for automated decisions with significant effects), EU AI Act Arts. 13-14 (transparency and human oversight requirements for high-risk AI, including logging and documentation obligations), CCPA/CPRA (right to know about automated decision-making), and emerging US state AI transparency laws (Colorado AI Act, Virginia CDPA). The EU AI Office, EU data protection authorities, and the California Privacy Protection Agency are the key enforcement authorities.