Anthropic · Anthropic API Usage Policy

High-Risk Use Case Requirements

High severity · Low confidence · Explicit document language · Unique · 0 of 325 platforms
Document Record

What it is

Certain high-stakes uses of Claude, such as mental health support, medical advice, or crisis services, require operators to meet additional safety standards beyond the baseline rules.

This analysis describes what Anthropic's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

The existence of a separate, elevated tier for high-risk consumer-facing use cases signals that Anthropic recognizes some deployments create heightened risk of harm to vulnerable individuals, and operators in those spaces face stricter compliance obligations.

Interpretive note: The provided document text was truncated and did not include the full text of the High-Risk Use Case Requirements, so the specific additional obligations in this tier cannot be assessed.

Recent Activity

This document changed recently

High severity · Feb 27, 2026

Defense contractors and federal agencies using Claude must find alternatives. Enterprise customers with defense-adjacent business face compliance risk.

Consumer impact (what this means for users)

If you use a Claude-powered product for mental health support, medical information, or similar sensitive purposes, that product should be operating under stricter safety requirements than a general-purpose deployment. You can expect additional safeguards in those contexts, though the specific requirements depend on the operator's compliance.


Monitoring

Anthropic has changed this document before.

Original Clause Language

"Our High-Risk Use Case Requirements apply to specific consumer-facing use cases that pose an elevated risk of harm."

— Excerpt from Anthropic's Anthropic API Usage Policy


Institutional analysis (Compliance & governance intelligence)

(1) Regulatory landscape: High-risk consumer-facing deployments likely engage HIPAA where health information is involved, FTC Act Section 5 for deceptive health claims, FDA regulations where AI-generated content constitutes medical device output, and state-level telehealth and mental health platform regulations. In the EU, the AI Act classifies certain health and safety-related AI systems as high-risk under Annex III, triggering mandatory conformity assessment obligations.

(2) Governance exposure: High for operators in healthcare, mental health, crisis support, and similar verticals. The tiered policy structure creates a compliance-gap risk: operators who deploy in high-risk categories without implementing the additional requirements may face enforcement action from Anthropic as well as regulatory exposure from applicable sector regulators. The specific requirements of this tier were not fully available in the provided document text.

(3) Jurisdiction flags: Healthcare AI deployments face the highest regulatory complexity across US state telehealth laws, federal HIPAA and the FTC Health Breach Notification Rule, and EU AI Act high-risk classification. Mental health platforms serving minors face additional obligations under COPPA and state-specific minor mental health laws. Crisis support deployments must evaluate applicable duty-of-care standards.

(4) Contract and vendor implications: Operators in high-risk verticals must obtain and implement the specific High-Risk Use Case Requirements before deployment. API agreements with Anthropic should confirm whether attestation or certification of compliance with these requirements is required. Downstream liability for harm arising from non-compliant high-risk deployments should be assessed in vendor and operator agreements.

(5) Compliance considerations: Operators considering high-risk deployments should conduct a specific review of the full High-Risk Use Case Requirements document, which was not fully available in the provided text. Legal teams should independently assess whether the additional requirements satisfy applicable sector-specific regulatory standards, as policy compliance does not necessarily constitute regulatory compliance.


Applicable agencies

  • FTC
    The FTC has authority over deceptive health and safety claims in consumer-facing AI products and enforces the Health Breach Notification Rule
  • HHS OCR
    HHS OCR has authority over HIPAA compliance where Claude deployments involve protected health information in covered entity or business associate contexts

Provision details

Document information
Document
Anthropic API Usage Policy
Entity
Anthropic
Document last updated
May 11, 2026
Tracking information
First tracked
May 11, 2026
Last verified
May 11, 2026
Record ID
CA-P-009967
Document ID
CA-D-00013
Evidence Provenance
Source URL
Wayback Machine
Content hash (SHA-256)
60e693438d9f7f47deb8f3bfb819343e26b5fe0eb90d56280568f1dd95ae660f
Analysis generated
May 11, 2026 00:39 UTC
Methodology
Evidence
✓ Snapshot stored   ✓ Hash verified
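The record above publishes a SHA-256 content hash so that anyone holding the archived snapshot can independently confirm it matches the analyzed text. As an illustration only (the function name and example inputs are ours, not part of ConductAtlas tooling), a minimal Python sketch of that verification step:

```python
import hashlib

def verify_snapshot(data: bytes, expected_hex: str) -> bool:
    """Return True if the SHA-256 digest of `data` matches the published hash."""
    return hashlib.sha256(data).hexdigest() == expected_hex.lower()

# Known digest of the empty byte string, used here only as a self-contained demo:
EMPTY_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
print(verify_snapshot(b"", EMPTY_SHA256))   # → True
print(verify_snapshot(b"edited", EMPTY_SHA256))  # → False
```

In practice the snapshot bytes would be read from the archived file and compared against the hash shown in this record; any single-byte change to the snapshot produces a different digest.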
Citation Record
Entity: Anthropic
Document: Anthropic API Usage Policy
Record ID: CA-P-009967
Captured: 2026-05-11 00:39:26 UTC
SHA-256: 60e693438d9f7f47…
URL: https://conductatlas.com/platform/anthropic/anthropic-api-usage-policy/high-risk-use-case-requirements/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity
High


Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Anthropic's High-Risk Use Case Requirements clause do?

The existence of a separate, elevated tier for high-risk consumer-facing use cases signals that Anthropic recognizes some deployments create heightened risk of harm to vulnerable individuals, and operators in those spaces face stricter compliance obligations.

How does this clause affect you?

If you use a Claude-powered product for mental health support, medical information, or similar sensitive purposes, that product should be operating under stricter safety requirements than a general-purpose deployment. You can expect additional safeguards in those contexts, though the specific requirements depend on the operator's compliance.

Is ConductAtlas affiliated with Anthropic?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Anthropic.