Anthropic · Anthropic API Usage Policy

Agentic Use and Autonomous Action Guidelines

Medium severity · Low confidence · Explicit document language · Unique · 0 of 325 platforms
Document Record

What it is

Anthropic has specific rules for deployments where Claude operates autonomously or takes actions on behalf of users through connected tools and systems, including through the Model Context Protocol.

This analysis describes what Anthropic's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

Agentic AI systems that can take real-world actions (browsing the web, executing code, managing files, interacting with external services) create qualitatively different risks than conversational AI, and the existence of dedicated guidelines signals that Anthropic recognizes this distinction.

Interpretive note: The provided document text was truncated and did not include the full text of the Additional Use Case Guidelines for agentic use and MCP servers, so the specific obligations in this tier cannot be fully assessed.

Recent Activity

This document changed recently

High · Feb 27, 2026

Defense contractors and federal agencies using Claude must find alternatives. Enterprise customers with defense-adjacent business face compliance risk.

Consumer impact (what this means for users)

If you use Claude through a product that connects it to external tools, automated workflows, or real-world systems, additional rules apply to that deployment. These provisions are designed to limit harms that could arise from AI taking autonomous actions on your behalf or in your environment.

How other platforms handle this

ClickUp · Medium

When you use AI features of the Services, you acknowledge that your inputs may be processed by third-party AI providers. ClickUp may use anonymized and aggregated data derived from your use of the Services to improve and train AI models and features.

Windsurf · Medium

We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization). We may leverage these models independent of user selection for processi...

Dun & Bradstreet · Medium

Some of the systems we use to process data are AI Systems. We aggregate data, combine, and generate data, including scores, ratings, and other analytics. TRUSTe Responsible AI Certification (2024)


Monitoring

Anthropic has changed this document before.

Original Clause Language

"Our Additional Use Case Guidelines apply to certain other use cases, including consumer-facing chatbots, products serving minors, agentic use, and Model Context Protocol servers."

— Excerpt from Anthropic's Anthropic API Usage Policy

Institutional analysis (Compliance & governance intelligence)

(1) Regulatory landscape: Agentic AI deployments engage the EU AI Act's provisions on high-risk AI systems and general-purpose AI models, particularly where autonomous decision-making affects individuals. FTC Act Section 5 deceptive-practices standards apply to automated actions taken without adequate user disclosure. GDPR Articles 13, 14, and 22 on automated decision-making are relevant where agentic systems make consequential choices affecting individuals. MCP server deployments that process personal data create GDPR controller or processor obligations, depending on configuration.

(2) Governance exposure: High for enterprise operators deploying agentic Claude systems with real-world tool access. The existence of dedicated agentic guidelines indicates elevated risk recognition, but the truncated document does not permit full analysis of the specific agentic restrictions. Operators should obtain and review the complete guidelines before deployment.

(3) Jurisdiction flags: EU operators face the highest regulatory exposure for agentic AI under the AI Act, particularly for systems classified as high-risk under Annex III. UK operators should evaluate alignment with ICO guidance on AI and automated decision-making. US federal deployments must comply with OMB AI governance memoranda on autonomous AI systems.

(4) Contract and vendor implications: Operators building agentic products on Claude via the API must ensure their own terms of service adequately disclose the autonomous nature of the system to end users. MCP server deployments create third-party integration risks that require vendor due diligence on data flows and action scope. Liability allocation for autonomous AI actions that cause harm should be explicitly addressed in operator agreements.

(5) Compliance considerations: Operators deploying agentic Claude systems should conduct a specific review of the Additional Use Case Guidelines (which were not fully available in the provided document text) and implement human-in-the-loop controls, audit logging, and action-scope limitations proportionate to the risk level of the deployment. Consent mechanisms for autonomous actions should be evaluated against GDPR Article 22 requirements.
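The human-in-the-loop controls, audit logging, and action-scope limits described above can be sketched as a simple approval gate around an agent's tool calls. Everything here is illustrative: the tool names, the two risk tiers, and the `request_action` helper are assumptions for the sketch, not anything specified in Anthropic's guidelines.

```python
import json
import time

# Hypothetical scoping: which tools an agent may use without review,
# and which require an explicit human decision. Default is deny.
AUTO_APPROVED = {"read_file", "search_web"}
REQUIRES_REVIEW = {"send_email", "execute_code", "delete_file"}

audit_log = []  # append-only record of every requested action


def request_action(tool: str, args: dict, approver=None) -> bool:
    """Gate one agent tool call: auto-approve low-risk actions, ask a
    human approver for high-risk ones, deny everything out of scope,
    and log the request and decision either way."""
    if tool in AUTO_APPROVED:
        decision = "auto-approved"
    elif tool in REQUIRES_REVIEW and approver is not None:
        decision = "approved" if approver(tool, args) else "denied"
    else:
        decision = "denied"  # out-of-scope or no reviewer available
    audit_log.append({
        "ts": time.time(),
        "tool": tool,
        "args": json.dumps(args),
        "decision": decision,
    })
    return decision in ("auto-approved", "approved")
```

A production gate would additionally persist the log to append-only storage, scope approvals per user and per session, and tie the risk tiers to the deployment's actual threat model rather than a static set.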


Applicable agencies

  • FTC
    The FTC has authority over unfair or deceptive practices in automated systems, including agentic AI that takes actions affecting consumers without adequate disclosure

Applicable regulations

  • EU AI Act · European Union
  • California AB 2013 AI Training Data Transparency · US-CA
  • Colorado AI Act · US-CO
  • EU AI Act - High Risk Provisions · EU
  • GDPR · European Union
  • Texas AI Act · Texas, USA
  • Trump Executive Order on AI Policy Framework · US
  • UK GDPR · United Kingdom

Provision details

Document information
Document: Anthropic API Usage Policy
Entity: Anthropic
Document last updated: May 11, 2026

Tracking information
First tracked: May 11, 2026
Last verified: May 11, 2026
Record ID: CA-P-009966
Document ID: CA-D-00013

Evidence Provenance
Source URL: Wayback Machine
Content hash (SHA-256): 60e693438d9f7f47deb8f3bfb819343e26b5fe0eb90d56280568f1dd95ae660f
Analysis generated: May 11, 2026 00:39 UTC
Evidence: ✓ Snapshot stored · ✓ Hash verified
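The recorded SHA-256 content hash lets anyone re-verify an archived snapshot against the record. A minimal check, assuming the hash was computed over the snapshot's raw bytes (the record does not specify the exact canonicalization, so that is an assumption):

```python
import hashlib

# Content hash recorded in the Evidence Provenance section above.
RECORDED_HASH = "60e693438d9f7f47deb8f3bfb819343e26b5fe0eb90d56280568f1dd95ae660f"


def verify_snapshot(snapshot_bytes: bytes, expected_hex: str = RECORDED_HASH) -> bool:
    """Recompute SHA-256 over the archived snapshot and compare the
    hex digest to the recorded content hash."""
    return hashlib.sha256(snapshot_bytes).hexdigest() == expected_hex
```

If the recomputed digest differs, the bytes in hand are not the bytes that were archived when the record was created.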
Citation Record
Entity: Anthropic
Document: Anthropic API Usage Policy
Record ID: CA-P-009966
Captured: 2026-05-11 00:39:26 UTC
SHA-256: 60e693438d9f7f47…
URL: https://conductatlas.com/platform/anthropic/anthropic-api-usage-policy/agentic-use-and-autonomous-action-guidelines/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity: Medium


Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

Is ConductAtlas affiliated with Anthropic?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Anthropic.