Microsoft commits to building AI systems that keep humans in control of important decisions, rather than allowing AI to operate entirely autonomously in high-stakes situations.
For consumers, this means Microsoft's AI products are designed with human review checkpoints, which is especially important when AI is used in healthcare, legal, or financial contexts.
Human-in-the-loop requirements align with EU AI Act Article 14 provisions for high-risk AI systems; compliance teams deploying Microsoft AI in regulated sectors should verify contractual guarantees of these oversight mechanisms.
This document describes Microsoft's self-imposed ethical standards for how AI is developed and deployed in products consumers use daily, including Copilot and Azure AI services. While it does not grant enforceable legal rights, it signals the governance guardrails around AI systems that may affect decisions about your data, content, and interactions. Consumers benefit indirectly from commitments to fairness, human oversight, and privacy by design, but this document alone provides no direct contractual recourse.