This document explains Microsoft's rules and commitments for building and using AI technology responsibly. It sets out principles such as fairness, transparency, and safety that Microsoft promises to follow when creating AI products. For everyday users, this means Microsoft has made public commitments about how its AI tools are designed to protect your rights and avoid harmful outcomes.
Technical Summary
This document is Microsoft's AI Governance Framework, establishing the company's principles, policies, and operational commitments for the responsible development and deployment of artificial intelligence systems. It creates obligations around fairness, reliability, privacy, security, inclusiveness, transparency, and accountability across Microsoft's AI product lifecycle. Key provisions address human oversight requirements, prohibited AI use cases, data governance standards, and compliance with emerging AI-specific regulations including the EU AI Act. The framework applies to Microsoft's internal teams, enterprise customers deploying Azure AI services, and third-party partners integrating Microsoft AI capabilities, with specific obligations flowing down through commercial contracts.
Institutional Analysis
This framework engages directly with the EU AI Act, GDPR, CCPA, and emerging US federal AI guidance, creating compliance-relevant representations about Microsoft's AI system development and deployment practices. Institutional buyers and enterprise compliance teams should note that Microsoft's state…
Microsoft has identified specific AI applications it will not build or deploy, such as systems designed to manipulate people, enable mass surveillance, or cause significant harm to individuals or groups.
Microsoft commits to ensuring that humans remain in control of significant AI-driven decisions, particularly in high-stakes contexts like healthcare, finance, and legal matters.
Microsoft commits to designing AI systems that treat all people fairly and do not produce biased or discriminatory outcomes based on characteristics like race, gender, age, or disability.
Microsoft explicitly commits to complying with the EU AI Act, which introduces legal requirements for AI systems based on their risk level, including mandatory human oversight, transparency, and conformity assessments for high-risk AI.
Microsoft commits to building privacy protections into AI systems from the ground up, limiting data collection, enabling user controls, and minimizing the use of personal data in AI training and operation.
Microsoft commits to being open about how its AI systems work, including disclosing when AI is being used and providing explanations for AI-driven decisions where possible.
Microsoft establishes internal governance bodies, processes, and leadership accountability to ensure its AI principles are actually implemented across the company's products and services.
Microsoft commits to responsible sourcing and management of data used to train AI systems, including measures to avoid using personal data inappropriately and to ensure training datasets do not embed harmful biases.