To put these principles into practice, Microsoft has established internal governance bodies, including the Aether Committee (an advisory body of senior leaders and researchers) and the Office of Responsible AI, which sets the rules and processes for responsible AI across the company.
Naming these governance bodies creates an accountability structure that regulators and the public can point to; their effectiveness, or lack of it, will determine whether Microsoft's AI commitments are actually operationalized or remain aspirational.
Microsoft's Responsible AI framework sets out six ethical principles (fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability) that govern how AI is built and deployed across Microsoft's consumer-facing products. While these commitments signal meaningful intent, they are voluntary and do not create legally enforceable rights for individual users, so consumers harmed by AI decisions have limited direct recourse under this document alone. You can submit feedback or concerns about Microsoft AI systems through the responsible AI resources linked at microsoft.com/en-us/ai/responsible-ai.