The Azure legal framework referenced by this hub covers AI-specific products including Azure OpenAI in Foundry Models, Foundry Agent Service, Microsoft Copilot, and related AI developer tools, which may be subject to additional or product-specific terms beyond the master Azure terms.
This analysis describes what Microsoft Azure's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology for details.
AI services like Azure OpenAI and Copilot may carry distinct terms governing data use for model training, output ownership, acceptable use, and liability for AI-generated content that differ from standard Azure cloud service terms.
Interpretive note: AI-specific product terms are not reproduced on this index page; applicable terms for each AI service depend on separate product-specific agreements linked from the Azure legal hub, and their specific provisions regarding data use and output ownership require direct review.
Customers using Azure AI services should be aware that AI-specific product terms may govern how input data is processed, whether it may be used to improve models, who owns AI-generated outputs, and what restrictions apply to use cases; these terms may be separate from and in addition to the standard Azure service terms.
How other platforms handle this
ISO/IEC 42001:2023
When you use AI features of the Services, you acknowledge that your inputs may be processed by third-party AI providers. ClickUp may use anonymized and aggregated data derived from your use of the Services to improve and train AI models and features.
We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization). …
Monitoring
Microsoft Azure has changed this document before.
Receive same-day alerts, structured change summaries, and monitoring for up to 10 platforms.
(1) REGULATORY LANDSCAPE: Azure AI services implicate the EU AI Act for EU/EEA deployments, particularly for high-risk AI system use cases. The EU AI Act imposes obligations on both providers and deployers of AI systems, meaning Azure customers deploying AI services for regulated use cases (such as employment screening, credit assessment, or healthcare diagnostics) may have compliance obligations independent of Microsoft's own obligations. The FTC also has enforcement authority under the FTC Act over deceptive or unfair practices related to AI outputs and disclosures.
(2) GOVERNANCE EXPOSURE: High for regulated-industry customers deploying Azure AI services in high-risk use cases under the EU AI Act; medium for general enterprise customers. AI-specific acceptable use policies may restrict certain deployment scenarios, and violations could result in service suspension. Data handling terms for AI services, including whether customer data may be used to improve AI models, require careful review because they may differ from standard Azure data processing terms.
(3) JURISDICTION FLAGS: EU/EEA customers deploying Azure AI in high-risk categories under the EU AI Act face the most significant exposure. US federal government customers should verify FedRAMP authorization for AI-specific services. California customers should assess CCPA implications for AI systems that process personal data. Healthcare and financial services customers should evaluate AI-specific terms against sector-specific regulatory requirements.
(4) CONTRACT AND VENDOR IMPLICATIONS: Procurement teams reviewing Azure AI service agreements should specifically identify: (a) whether customer input data is used for model training and what opt-out mechanisms exist; (b) ownership and licensing terms for AI-generated outputs; (c) acceptable use restrictions and how violations are enforced; and (d) liability allocation for AI output errors or harms. These terms may differ materially from standard cloud service terms and warrant separate legal review.
(5) COMPLIANCE CONSIDERATIONS: Compliance teams should conduct AI-specific risk assessments for each Azure AI service deployment, including EU AI Act risk categorization for EU/EEA contexts. AI governance frameworks should address acceptable use policy compliance, AI output review processes, and the contractual terms governing data input and output handling for each Azure AI product in use.
Full compliance analysis
Regulatory citations, enforcement risk, and due diligence action items.
Free: track 1 platform + weekly digest. Watcher: 10 platforms + same-day alerts. No credit card required.
How 10 AI platforms describe the use of user data for model training, improvement, and development, based on archived governance provisions.
Professional Governance Intelligence
Need to monitor specific governance provisions?
Professional includes provision-level monitoring, governance timelines, regulatory mapping, and audit-ready analysis.
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Microsoft Azure.