Microsoft commits to ensuring that people understand how its AI systems work, including publishing information about AI capabilities and limitations and disclosing when AI is being used.
Even though Microsoft promises AI transparency, this document does not tell you when you are interacting with AI, which model is making decisions about you, or what its limitations are in any given product context.
Transparency about AI decision-making is increasingly required by law in multiple jurisdictions, and Microsoft's voluntary commitment on this page does not satisfy the specific disclosure requirements mandated by regulations like the EU AI Act or GDPR Art. 22.
REGULATORY FRAMEWORK: GDPR Art. 13-14 require disclosure of automated decision-making logic and the existence of profiling to data subjects. GDPR Art. 22(3) requires meaningful information about the logic involved in automated decisions. The EU AI Act Art. 13 mandates transparency obligations for providers of high-risk AI systems. FTC Act Section 5 covers deceptive AI disclosure practices. Colorado, Connecticut, and Texas AI transparency laws (effective 2024-2026) require disclosure of consequential AI decisions. Enforcement: national DPAs, the European AI Office, the FTC, and state AGs.