Microsoft states that its AI systems should be understandable, with people having sufficient information to know when and how AI is being used and what its limitations are.
This analysis describes what Microsoft's agreement states, permits, or reserves; it is not a legal determination of enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This principle addresses explainability and disclosure in AI systems, which is directly relevant to regulatory requirements around automated decision-making and the right to explanation under frameworks such as GDPR.
Interpretive note: The full document text was not available for direct quotation; this principle is characterized from publicly known content and the page's stated subject matter. Regulatory applicability depends heavily on the specific product and jurisdiction.
This is a stated design principle; it does not independently establish a right to explanation for consumers subject to automated decisions in Microsoft AI products. Applicable legal rights to explanation depend on the specific product, jurisdiction, and regulatory framework governing the use case.
How other platforms handle this
YOU MUST BE AND HEREBY AFFIRM THAT YOU ARE AN ADULT OF THE LEGAL AGE OF MAJORITY IN YOUR COUNTRY OR STATE OF RESIDENCE. If you are under the legal age of majority, your parent or legal guardian must consent to this agreement.
If you are a California resident, you may have certain rights under the California Consumer Privacy Act (CCPA). These rights may include: the right to know about personal information collected, disclosed, or sold; the right to delete personal information collected from you; the right to opt-out of t...
Our generative AI services are not directed at children. If you are under the applicable age of majority in your jurisdiction, you may only use these services with parental or guardian consent and supervision, subject to any additional restrictions set out in our family policies.
Monitoring
Microsoft has changed this document before.
(1) Regulatory landscape: Transparency obligations for automated decision-making are addressed under GDPR Article 22 and Recital 71, which provide rights related to automated decisions with significant effects. The EU AI Act imposes transparency requirements for certain AI system categories. This policy statement does not satisfy the specific disclosure obligations under these frameworks. The FTC has also issued guidance on transparency in AI-driven commercial practices.
(2) Governance exposure: Medium for organizations deploying Microsoft AI in automated decision-making contexts that trigger GDPR Article 22 or EU AI Act transparency requirements, because reliance on this policy statement rather than product-level disclosures and consent mechanisms would be insufficient.
(3) Jurisdiction flags: EU/EEA deployments involving automated decisions with legal or significant effects create the highest exposure. US states with algorithmic-transparency legislation also create heightened exposure for certain use cases.
(4) Contract and vendor implications: Enterprise customers should verify that Microsoft's product-level documentation supports the transparency disclosures required by applicable law, and that data processing agreements address automated decision-making requirements.
(5) Compliance considerations: Organizations subject to GDPR Article 22 should separately assess whether their use of Microsoft AI products requires a data protection impact assessment and appropriate transparency notices to affected individuals.
Full compliance analysis
Regulatory citations, enforcement risk, and due diligence action items.
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Microsoft.