At Microsoft, we've chosen to focus on six principles that we believe should guide AI development and use: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
These principles set the baseline standard for how Microsoft AI systems that affect people's lives, from AI-screened job applications to healthcare tools, are expected to behave. While these commitments signal meaningful intent, they are voluntary rather than legally enforceable, and they do not create individual rights; consumers harmed by AI decisions have limited direct recourse under this framework alone. You can submit feedback or concerns about Microsoft AI systems through the dedicated responsible AI resources linked at microsoft.com/en-us/ai/responsible-ai.