Microsoft updated three discrete elements in its Responsible AI Principles document on March 6, 2026. The introductory call-to-action statement changed from 'Accelerate business growth with trustworthy AI' to 'Build your business with trustworthy AI', and the supporting descriptor was revised to emphasize 'safe and secure AI business practices' rather than risk reduction. A resource reference changed from 'Watch the webinar' to 'Get the e-book'. Finally, the language about Copilot security requirements received a minor grammatical adjustment: 'When using Copilot' became 'When you're using Copilots', while the statement that 'all your existing security and compliance requirements are inherited' remained unchanged. These are primarily formatting, presentational, and grammatical refinements with no material change to stated policy or obligations.
The updated document reframes Microsoft's messaging around responsible AI adoption. The introductory language now states 'Build your business with trustworthy AI' and describes how 'safe and secure AI business practices can enhance performance and effectively drive impact' rather than focusing on accelerating growth and reducing risk. A resource previously referenced as a webinar is now described as an e-book. Language about Copilot security was clarified to read 'When you're using Copilots at work' instead of 'When using Copilot at work'. These changes are presentational refinements that do not alter the substantive security or compliance commitments stated elsewhere in the principles.
The updated language maintains Microsoft's stated security and compliance commitments while refining how those commitments are presented to business customers. The change does not alter what security requirements are enforced, how data is protected, or what compliance obligations apply to Copilot deployments; it adjusts the marketing framing and resource delivery format.
Reframed from 'Accelerate business growth' to 'Build your business' language without altering underlying commitments
Grammatical clarification from 'When using Copilot' to 'When you're using Copilots at work' with no change to stated security inheritance model
Updated from 'Watch the webinar' to 'Get the e-book' with no change to substantive content availability
This change record describes what was added, removed, or modified in the document. Analysis reflects what the updated agreement states or permits. It does not constitute a legal determination about enforceability. Applicability may vary by jurisdiction.

Methodology
This change comprises minor rewordings, grammatical adjustments, and resource format updates to Microsoft's publicly stated Responsible AI Principles. No substantive policy obligations, data handling practices, security commitments, or compliance requirements have been modified. The stated security inheritance model for Copilot deployments remains unchanged. No regulatory compliance action is indicated.
ConductAtlas provides verified policy intelligence sourced directly from platform documents. All analysis is intended to support, not replace, legal and compliance review. Record CA-C-001841.