Microsoft commits to building AI systems guided by six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. These are the guiding principles for all of its AI work.
These six principles shape how Microsoft AI products are designed and can affect outcomes for users, for example whether AI hiring tools or credit-decision systems treat applicants equitably. They are not, however, legally enforceable commitments that consumers can rely on in court.
These principles define how Microsoft says it will design and operate AI products, which affects whether AI tools treat users fairly and protect their data.
REGULATORY FRAMEWORK: The six principles map directly to obligations under the EU AI Act (Articles 9-15 on risk management, transparency, and human oversight for high-risk AI), GDPR Art. 5 (principles of data processing, including fairness, transparency, and accountability), the NIST AI RMF (Govern, Map, Measure, and Manage functions), and FTC Act Section 5 (deceptive practices, if principles are publicly stated but not implemented). The EU AI Office and national DPAs are the primary enforcement authorities in the EU; the FTC has primary authority in the US.
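To make the principle-to-regulation mapping concrete, here is a minimal sketch of how it could be represented as a lookup table. The dictionary structure, key names, and the specific pairings are illustrative assumptions for this example; the cited provisions come from the text above, not from any official crosswalk published by Microsoft or a regulator.

```python
# Illustrative mapping of Microsoft's six responsible-AI principles to the
# regulatory provisions cited above. The pairings below are an assumption
# for demonstration purposes, not an authoritative compliance crosswalk.
PRINCIPLE_TO_REGULATIONS = {
    "fairness": ["GDPR Art. 5", "FTC Act Sec. 5"],
    "reliability_and_safety": ["EU AI Act Arts. 9-15"],
    "privacy_and_security": ["GDPR Art. 5"],
    "inclusiveness": ["NIST AI RMF (Map)"],
    "transparency": ["EU AI Act Arts. 9-15", "GDPR Art. 5"],
    "accountability": ["GDPR Art. 5", "NIST AI RMF (Govern)"],
}

def regulations_for(principle: str) -> list[str]:
    """Return the cited provisions for a principle, or [] if unmapped."""
    return PRINCIPLE_TO_REGULATIONS.get(principle, [])
```

A table like this is how compliance teams often operationalize high-level principles: each principle becomes a key whose obligations can be checked off during a product review.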