Sensitive AI use cases, particularly in law enforcement and surveillance, carry significant civil liberties implications; Microsoft's review process is meant to prevent harmful deployments.
Microsoft's Responsible AI framework sets out the ethical principles (fairness, reliability, privacy, security, inclusiveness, transparency, and accountability) that govern how AI is built and deployed across all Microsoft products used by consumers. While these commitments signal meaningful intent, they are voluntary and do not create legally enforceable rights for individual users; consumers harmed by AI decisions therefore have limited direct recourse under this document alone. Feedback or concerns about Microsoft AI systems can be submitted through the dedicated responsible AI resources linked at microsoft.com/en-us/ai/responsible-ai.