Microsoft commits that its AI systems will be explainable and honest, will not deceive users into thinking they are human, and will communicate their limitations.
This transparency commitment is directly relevant to consumers whose decisions are influenced by AI-powered services. Without a corresponding right to explanation or a recourse mechanism, however, the commitment offers limited practical protection to individuals affected by opaque AI decisions.
How other platforms handle this
You have not committed, been convicted of, or pled no contest to any crime involving violence or a threat of violence, or sexual misconduct.
Our Services are not targeted at children. You must be the legal age of majority where you reside to use the Services.
The Services are available to individuals age 13 and over. If you are between the age of 13 and the age of majority where you live, you must review these Terms of Use with your parent or guardian to confirm that you and your parent or guardian understand and agree to them.
As AI systems increasingly make or influence consequential decisions, the right to understand why a decision was made is fundamental to consumer protection and fairness.
(1) REGULATORY FRAMEWORK: EU AI Act Art. 13 mandates transparency for high-risk AI systems, requiring providers to ensure that outputs are sufficiently interpretable for deployers to understand and use them appropriately. GDPR Arts. 13-15 require meaningful information about the logic involved in automated decision-making, and Art. 22(3) guarantees the right to human intervention and to contest solely automated decisions. EU AI Act Art. 50 requires disclosure when consumers interact with AI chatbots or AI-generated content. FTC Act Section 5 applies to deceptive representations about AI. California AB 302 (pending) would require AI system transparency disclosures.