Before launching AI systems, Microsoft must assess what harms they could cause to people and take steps to reduce those harms; this review continues for as long as the system remains in use.
This provision creates a procedural safeguard that should reduce the likelihood of harmful, biased, or unsafe AI features reaching consumers. However, the assessments are conducted internally by Microsoft and their results are not routinely made public, which limits external verification.
Impact assessments are the primary mechanism for catching harmful AI outcomes before they affect consumers; without rigorous pre-deployment review, biased or unsafe AI systems can cause widespread harm before being corrected.
REGULATORY FRAMEWORK: AI impact assessment requirements engage EU AI Act Art. 9 (risk management system for high-risk AI), Art. 10 (data governance requirements), and Art. 17 (quality management system). GDPR Art. 35 requires data protection impact assessments (DPIAs) for high-risk processing including systematic profiling and large-scale processing of sensitive data. NIST AI RMF Map and Measure functions address impact assessment methodology. UK ICO guidance on AI and data protection also applies to UK deployments.