AI systems should perform reliably and safely: they should work as originally designed, respond effectively to unanticipated situations, and resist harmful manipulation. Because AI behavior can be unpredictable, these systems can cause harm in ways that traditional software does not.
This principle acknowledges that AI can fail in novel ways compared to traditional software, but it does not specify what liability Microsoft accepts when its AI systems malfunction or cause harm.
This document describes Microsoft's internal ethical framework for AI. As a voluntary policy statement, it does not directly alter consumer data rights, impose fees, or restrict legal recourse. Consumers using Microsoft AI products such as Copilot or Azure OpenAI Service are instead subject to separate, binding Terms of Service and Privacy Policies that govern data collection, use, and sharing. Microsoft's binding Privacy Statement, available at https://privacy.microsoft.com, describes what data Microsoft actually collects and how it is used.