8 provisions flagged: 4 high severity, 4 medium severity, 0 low severity
Summary

This is Microsoft's official AI governance document, setting out the rules Microsoft applies when building and using artificial intelligence across products such as Copilot, Azure AI, and Bing. The most important point for everyday users is that Microsoft commits to keeping humans in control of high-stakes AI decisions and to being transparent about when AI is involved in interactions that affect you. You should review the privacy settings in your Microsoft account to understand what data is used to train or personalize the AI features you interact with.

Technical Summary

This document is Microsoft's AI governance framework (published June 2025), establishing internal and external standards for the responsible development, deployment, and oversight of artificial intelligence systems across Microsoft products and services. The framework creates obligations for Microsoft to implement human oversight mechanisms, conduct impact assessments, enforce prohibited-use policies, and maintain transparency with users and regulators regarding AI system capabilities and limitations. Notably, the document pairs voluntary commitments with regulatory compliance language, which may create ambiguity about the enforceability of stated obligations and marks a deviation from the more binding governance instruments adopted by some industry peers. The framework directly engages the EU AI Act, the GDPR, and emerging US federal AI policy guidance, including the NIST AI Risk Management Framework, implicating both EU and US regulatory enforcement regimes. Material compliance considerations include whether Microsoft's internal governance controls satisfy the EU AI Act's requirements for high-risk AI systems, and whether its transparency and accountability commitments meet FTC standards for unfair or deceptive acts or practices as applied to AI-powered products in the US market.

Evidence Provenance
Captured: March 5, 2026 06:14 UTC
Document ID: CA-D-000004
Version ID: CA-V-000004
Archive: Wayback Machine (archived versions available)
SHA-256: 86c75fe503f3ee1225fc3eea0c770bbbdc795490a7a6402a5d9348b80bb540f8
Status: snapshot stored, text extracted, change verified, cryptographically signed
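The published SHA-256 digest lets anyone independently confirm that a downloaded copy of the snapshot matches the capture on record. A minimal sketch of that check, assuming you have the snapshot bytes locally (the function name and the tampered-input demo are illustrative, not part of the provenance record):

```python
import hashlib
import hmac

# Digest published in the evidence provenance record above
EXPECTED_SHA256 = "86c75fe503f3ee1225fc3eea0c770bbbdc795490a7a6402a5d9348b80bb540f8"

def verify_snapshot(data: bytes, expected_hex: str = EXPECTED_SHA256) -> bool:
    """Return True if the snapshot bytes hash to the published digest."""
    digest = hashlib.sha256(data).hexdigest()
    # Constant-time comparison avoids leaking match position via timing
    return hmac.compare_digest(digest, expected_hex)

# Any altered or substituted content fails the check
print(verify_snapshot(b"tampered content"))  # False
```

In practice you would read the archived file's bytes (e.g. `open(path, "rb").read()`) and pass them to the same function; a single changed byte produces an entirely different digest.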
High Severity — 4 provisions
Medium Severity — 4 provisions


Applicable Regulations

EU AI Act (European Union)
BIPA (Illinois, USA)
CCPA/CPRA (California, USA)
CFAA (United States Federal)
CAN-SPAM (United States Federal)
DMA (European Union)
DSA (European Union)
GDPR (European Union)
TCPA (United States Federal)
UK GDPR (United Kingdom)