Microsoft commits to protecting its AI systems from hacking, manipulation, and data theft, including threats unique to AI such as data poisoning, where attackers feed false data to corrupt AI outputs.
In practice, this provision means Microsoft should actively defend the AI systems that process your data and shape your interactions against cyberattacks and manipulation. A security failure in an AI system could expose your personal information or produce harmful AI outputs that affect you.
AI systems face unique security threats that could cause them to behave harmfully or unpredictably; a security failure could expose your personal data or lead an AI system to make harmful decisions about you.
REGULATORY FRAMEWORK: AI security commitments engage EU AI Act Art. 15 (accuracy, robustness, and cybersecurity for high-risk AI systems), the NIST AI Risk Management Framework's Govern and Manage functions on AI security, and general security-of-processing obligations under GDPR Art. 32. In the US, FTC Act Section 5 treats inadequate data security as an unfair practice (see In re LabMD, FTC 2016). CISA guidance on AI security and the 2023 National Cybersecurity Strategy address AI-specific security requirements, and the EU Cyber Resilience Act adds intersecting product-security obligations for AI-enabled products.