Microsoft · Responsible AI

AI Transparency Obligation

Medium severity

What it is

Microsoft commits that its AI systems will be explainable and honest, will not deceive users into thinking they are human, and will communicate their limitations.

Consumer impact (what this means for users)

This transparency commitment is directly relevant to consumers using AI-powered services that influence decisions about them, but without a corresponding right to explanation or a recourse mechanism, the commitment provides limited practical protection for individuals affected by opaque AI decisions.

How other platforms handle this

Hinge (Medium)

"You have not committed, been convicted of, or pled no contest to any crime involving violence or a threat of violence, or sexual misconduct."

Eventbrite (Medium)

"Our Services are not targeted at children. You must be the legal age of majority where you reside to use the Services."

Chegg (Medium)

"The Services are available to individuals age 13 and over. If you are between the age of 13 and the age of majority where you live, you must review these Terms of Use with your parent or guardian to confirm that you and your parent or guardian understand and agree to them."


Why it matters (compliance & risk perspective)

As AI systems increasingly make or influence consequential decisions, the right to understand why a decision was made is fundamental to consumer protection and fairness.

Original clause language
AI systems should be understandable. We should strive to more effectively explain how AI systems work and what they can and cannot do. Developers should be aware of the limitations of AI systems and be able to communicate those limitations to users. AI should not try to deceive users or obscure its nature as an AI.

Institutional analysis (Compliance & legal intelligence)

REGULATORY FRAMEWORK: EU AI Act Art. 13 mandates transparency for high-risk AI systems, requiring providers to ensure outputs are interpretable and users are informed they are interacting with an AI. GDPR Art. 22(3) requires meaningful information about automated decision-making logic. EU AI Act Art. 50 requires disclosure when consumers interact with AI-generated content or AI chatbots. FTC Act Section 5 applies to deceptive AI representations. California AB 302 (pending) would require AI system transparency disclosures.


Applicable agencies

  • FTC
    FTC Act Section 5 applies to deceptive AI practices including AI systems that obscure their nature or make misleading representations to consumers.

Applicable regulations

  • CFAA (United States Federal)
  • DMCA (United States Federal)
  • DSA (European Union)

Provision details

Document information
  Document: Responsible AI
  Entity: Microsoft
  Document last updated: March 5, 2026

Tracking information
  First tracked: March 15, 2026
  Last verified: April 9, 2026
  Record ID: CA-P-002516
  Document ID: CA-D-00003

Evidence Provenance
  Source URL: Wayback Machine
  SHA-256: de99fca7fd2ebd374c7f5dd22d7ff57569e2321c88c91f75c4f9e17147793b07
  Verified: ✓ Snapshot stored   ✓ Change verified
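The provenance record above can be independently checked against a local copy of the archived snapshot by recomputing its SHA-256 digest. A minimal sketch in Python (the helper names and example filename are illustrative, not part of any ConductAtlas tooling):

```python
import hashlib

# Digest recorded in the provenance record for CA-P-002516.
RECORDED_SHA256 = "de99fca7fd2ebd374c7f5dd22d7ff57569e2321c88c91f75c4f9e17147793b07"

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Hash the file in chunks so large snapshots are not loaded fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def matches_record(path: str) -> bool:
    """Return True if the local snapshot's digest matches the recorded one."""
    return sha256_of_file(path) == RECORDED_SHA256

# Example usage (hypothetical filename):
# matches_record("responsible-ai-snapshot.html")
```

Any byte-level change to the snapshot (even whitespace) yields a different digest, which is what makes the "Change verified" claim checkable by third parties.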
How to Cite
ConductAtlas Policy Archive
Entity: Microsoft | Document: Responsible AI | Record: CA-P-002516
Captured: 2026-03-15 11:09:49 UTC | SHA-256: de99fca7fd2ebd37…
URL: https://conductatlas.com/platform/microsoft/responsible-ai/ai-transparency-obligation/
Accessed: April 29, 2026
Classification
  Severity: Medium
  Categories:
