At Microsoft, we've committed to the responsible development and deployment of AI and have established six core principles to guide our work: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
These principles define Microsoft's stated ethical floor for AI development, but they are voluntary and not legally enforceable by consumers: if Microsoft's products fall short, there is no formal redress mechanism.
This document describes Microsoft's internal ethical framework for AI. As a voluntary policy statement, it does not alter consumer data rights, impose fees, or restrict legal recourse. Consumers using Microsoft AI products such as Copilot or Azure OpenAI Service remain subject to separate, binding Terms of Service and Privacy Policies that govern data collection, use, and sharing. Microsoft's binding Privacy Statement, available at https://privacy.microsoft.com, describes what data Microsoft actually collects and how it is used.