Microsoft · Responsible AI Report 2025

AI Security and Adversarial Robustness

Medium severity

What it is

Microsoft commits to protecting its AI systems from hacking, manipulation, and data theft, including threats unique to AI such as data poisoning, in which attackers feed false data into a system to corrupt its outputs.

Consumer impact (what this means for users)

This provision means Microsoft should be actively protecting the AI systems that process your data and shape your interactions from cyberattacks and manipulation. A security failure in an AI system could expose your personal information or produce harmful AI outputs that affect you.

Cross-platform context

See how other platforms handle AI Security and Adversarial Robustness and similar clauses.


Why it matters (compliance & risk perspective)

AI systems face unique security threats that could cause them to behave harmfully or unpredictably. A security failure could expose your personal data or lead an AI system to make harmful decisions about you.

Original clause language
Microsoft commits to implementing security controls for AI systems to protect against adversarial attacks, model theft, data poisoning, and other AI-specific security threats, and to conducting security testing of AI systems as part of the development and deployment lifecycle.

Institutional analysis (Compliance & legal intelligence)

REGULATORY FRAMEWORK: AI security commitments engage EU AI Act Art. 15 (accuracy, robustness, and cybersecurity for high-risk AI systems), the NIST AI RMF Govern 1.7 and Manage 4 subcategories on AI security, and general security-of-processing obligations under GDPR Art. 32. In the US, Section 5 of the FTC Act reaches inadequate security as an unfair practice (see In re LabMD, FTC 2016). CISA guidance on AI security and the National Cybersecurity Strategy (2023) address AI-specific security requirements, and the EU Cyber Resilience Act intersects with AI product security obligations.


Applicable agencies

  • FTC
    FTC has authority to enforce against inadequate AI security practices that constitute unfair acts causing consumer harm under Section 5 of the FTC Act.

Provision details

Document information
Document
Responsible AI Report 2025
Entity
Microsoft
Document last updated
March 5, 2026
Tracking information
First tracked
March 5, 2026
Last verified
April 27, 2026
Record ID
CA-P-003123
Document ID
CA-D-00004
Evidence Provenance
Source URL
Wayback Machine
SHA-256
99c61ee37f0300e932720498b6db37eb5eaf309ded7c40585a2fd7f70c4ce999
Verified
✓ Snapshot stored   ✓ Change verified
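Readers who want to confirm that a downloaded snapshot matches the published digest can recompute the SHA-256 locally and compare. A minimal sketch in Python (the snapshot path is hypothetical; the expected digest is the one recorded above):

```python
import hashlib

# The digest published in the provenance record above.
EXPECTED_SHA256 = "99c61ee37f0300e932720498b6db37eb5eaf309ded7c40585a2fd7f70c4ce999"

def sha256_of_file(path: str) -> str:
    """Hash the file in chunks so large snapshots are not loaded into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_snapshot(path: str, expected: str = EXPECTED_SHA256) -> bool:
    """Return True if the file's SHA-256 matches the published digest."""
    return sha256_of_file(path) == expected
```

A match confirms the local copy is byte-identical to the archived snapshot; any mismatch means the file was altered or corrupted in transit.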
How to Cite
ConductAtlas Policy Archive
Entity: Microsoft | Document: Responsible AI Report 2025 | Record: CA-P-003123
Captured: 2026-03-05 09:35:48 UTC | SHA-256: 99c61ee37f0300e9…
URL: https://conductatlas.com/platform/microsoft/responsible-ai-report-2025/ai-security-and-adversarial-robustness/
Accessed: May 2, 2026
Classification
Severity
Medium
Categories
