Microsoft commits to building AI systems that work as intended, behave safely in unexpected situations, and cannot be easily manipulated to cause harm.
In practice, this means Microsoft holds itself to an internal standard of AI reliability and safety; the document does not, however, establish a legal warranty or create a cause of action against Microsoft if an AI system fails and causes harm to a consumer.
Safety failures in AI systems can cause real-world harm — from incorrect medical information to unsafe autonomous decisions — and this commitment sets Microsoft's own standard for what reliable AI should look like.
Regulatory framework: AI reliability and safety implicates EU AI Act Arts. 9-16 (risk management, accuracy, robustness, and cybersecurity requirements for high-risk AI systems); the NIST AI RMF (Map and Measure functions); FTC Act Section 5 (safety-related deceptive practices); product liability law (the revised 2024 EU Product Liability Directive and US common-law product liability); and sector-specific safety regulations (FDA guidance on AI/ML-based software as a medical device, NHTSA guidance on autonomous vehicles). The EU AI Office, FDA, NHTSA, and FTC are the relevant enforcement authorities.