Microsoft · Responsible AI

Privacy and Security in AI Commitment

Medium severity

What it is

Microsoft commits that its AI systems will protect users' personal information, apply privacy-by-design principles, and handle data only in appropriate ways.

Consumer impact (what this means for users)

This provision affects how Microsoft AI products such as Copilot and Azure AI process users' personal data. However, the phrase 'appropriate ways' is undefined, leaving Microsoft significant discretion over what counts as acceptable use of personal information in AI contexts.

What you can do

⚠️ These actions may provide transparency or partial mitigation but may not fully address the underlying issue. Effectiveness varies by jurisdiction and individual circumstances.
  • Export Your Data
    Visit account.microsoft.com/privacy to open Microsoft's privacy dashboard, where you can view, download, and delete the personal data Microsoft holds about you, including data processed by its AI services.

How other platforms handle this

Netflix · Medium severity

Netflix operates from the United States and relies on a number of legal mechanisms to transfer personal information from the European Economic Area (EEA), United Kingdom, Switzerland, and other countries to the United States or other countries. In particular, Netflix uses standard contractual clause...

Calm · Medium severity

In connection with any merger, sale of company assets, financing or acquisition of all or a portion of our business by another company;

Betterment · Medium severity

For joint marketing with other financial companies - To offer our products and services to you. Our joint marketing partners include our banking partner, nbkc bank, and other financial services companies.


Why it matters (compliance & risk perspective)

AI systems process vast amounts of personal data, so this commitment bears on how Microsoft's AI products handle sensitive information. However, the standards it references are vague, and it grants no specific data subject rights.

Original clause language
AI systems should be secure and respect privacy. AI should be developed in a manner that allows people to trust that their personal information will be managed in accordance with privacy standards and used only in appropriate ways. Developers working with AI must protect personal information and apply privacy by design principles.

Institutional analysis (Compliance & legal intelligence)

(1) REGULATORY FRAMEWORK: GDPR Art. 5 (data processing principles), Art. 25 (data protection by design and by default), and Art. 22 (automated decision-making) are directly implicated. CCPA/CPRA §1798.100 et seq. governs California residents' rights over personal data used in AI systems. HIPAA 45 CFR Part 164 applies where Microsoft AI processes protected health information. EU AI Act Art. 10 requires data governance measures for personal data used in high-risk AI training and operation. Enforcement authorities include EU DPAs (GDPR), California Privacy Protection Agency (CCPA/CPRA), and HHS OCR (HIPAA). (2)

Compliance intelligence locked

Regulatory citations, enforcement risk, and due diligence action items.


Applicable agencies

  • FTC
    FTC Act Section 5 applies to unfair or deceptive data practices by AI companies, and the FTC has authority to enforce privacy commitments made in public-facing corporate policies.
    File a complaint →
  • HHS OCR
    HHS OCR has jurisdiction where Microsoft AI products process protected health information under HIPAA, including AI-powered healthcare analytics and clinical decision support tools.
    File a complaint →

Applicable regulations

  • BIPA (Illinois, USA)
  • CCPA/CPRA (California, USA)
  • COPPA (United States Federal)
  • CAN-SPAM (United States Federal)
  • DMA (European Union)
  • FCRA (United States Federal)
  • GDPR (European Union)
  • GLBA (United States Federal)
  • HIPAA (United States Federal)
  • UK GDPR (United Kingdom)

Provision details

Document information
  • Document: Responsible AI
  • Entity: Microsoft
  • Document last updated: March 5, 2026

Tracking information
  • First tracked: March 15, 2026
  • Last verified: April 9, 2026
  • Record ID: CA-P-002515
  • Document ID: CA-D-00003

Evidence provenance
  • Source URL: Wayback Machine
  • SHA-256: de99fca7fd2ebd374c7f5dd22d7ff57569e2321c88c91f75c4f9e17147793b07
  • Verified: ✓ Snapshot stored · ✓ Change verified
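The SHA-256 fingerprint above lets anyone confirm that a locally saved copy of the archived document is byte-for-byte identical to the stored snapshot. A minimal sketch in Python; the function name and the file path are illustrative, not part of any ConductAtlas tooling:

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the fingerprint recorded in the provenance section.
expected = "de99fca7fd2ebd374c7f5dd22d7ff57569e2321c88c91f75c4f9e17147793b07"
# The path below is a placeholder for wherever you saved the snapshot:
# assert sha256_of_file("responsible-ai-snapshot.html") == expected
```

A match proves the saved file is unchanged since capture; any mismatch means the file differs from the archived snapshot, whether through edits, re-encoding, or corruption.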
How to Cite
ConductAtlas Policy Archive
Entity: Microsoft | Document: Responsible AI | Record: CA-P-002515
Captured: 2026-03-15 11:09:49 UTC | SHA-256: de99fca7fd2ebd37…
URL: https://conductatlas.com/platform/microsoft/responsible-ai/privacy-and-security-in-ai-commitment/
Accessed: April 29, 2026
Classification
  • Severity: Medium
  • Categories:
