
High-Risk Use Case Requirements — Mental Health and Crisis Support

Severity: High · Unique (0 of 325 tracked platforms have a comparable clause)
Document Record

What it is

Apps built on Claude that offer mental health or crisis support must include real crisis resources, must not try to replace actual mental health professionals, and must tell users to see a licensed provider for clinical decisions.

This analysis describes what Anthropic's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This provision creates specific safety obligations for a rapidly growing category of AI wellness apps, protecting vulnerable users from over-relying on AI in moments of crisis.

Recent Activity

This document changed recently.

High severity · Feb 27, 2026: Defense contractors and federal agencies using Claude must find alternatives. Enterprise customers with defense-adjacent business face compliance risk.

Consumer impact (what this means for users)

If you are using a mental health or crisis support app powered by Claude, that app is required to provide real emergency resources and must not position itself as a substitute for professional clinical care — giving you an enforceable baseline of safety protections.

Cross-platform context

See how other platforms handle High-Risk Use Case Requirements — Mental Health and Crisis Support and similar clauses.


Monitoring

Anthropic has changed this document before.
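The underlying technique for spotting such changes is straightforward to reproduce independently: fetch the document, hash it, and compare against the digest from the previous run. A minimal sketch, not ConductAtlas's actual pipeline (the URL and state-file path are placeholders):

# Naive change detection: fetch, hash, compare with the previous run.
# Placeholder URL and state path; a real monitor would also normalize
# the HTML so cosmetic page changes don't trigger false alerts.
import hashlib
import pathlib
import urllib.request

POLICY_URL = "https://example.com/usage-policy"  # placeholder, not the real URL
STATE = pathlib.Path("last_hash.txt")

body = urllib.request.urlopen(POLICY_URL).read()
digest = hashlib.sha256(body).hexdigest()

previous = STATE.read_text().strip() if STATE.exists() else None
if digest != previous:
    print(f"document changed: {previous} -> {digest}")
    STATE.write_text(digest)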

Original Clause Language

"Products or services providing crisis support or other emotional, mental, or behavioral health content... Must include appropriate crisis escalation mechanisms and support resources as part of the user experience... Must not facilitate user dependence on the product as a mental health care provider substitute... Must advise users to seek licensed healthcare providers for any clinical diagnostic or treatment decisions."

— Excerpt from Anthropic's Anthropic API Usage Policy
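The clause specifies outcomes, not an implementation. As a rough illustration, a minimal sketch of how a Claude-backed support app might wire in the required escalation path and licensed-care disclaimer could look like the following (the model ID, keyword trigger, and resource text are illustrative assumptions, not part of the policy):

# Hypothetical sketch only: one way to satisfy the clause's escalation,
# non-dependence, and disclaimer requirements in a Claude-backed app.
# The keyword trigger, model ID, and resource text are illustrative.
import anthropic

CRISIS_RESOURCES = (
    "If you are in crisis, call or text 988 (Suicide & Crisis Lifeline, US) "
    "or contact your local emergency services."
)
DISCLAIMER = (
    "This app is not a substitute for professional care. Please consult a "
    "licensed healthcare provider for any diagnostic or treatment decision."
)
# Naive keyword check; a production system would use a proper classifier.
CRISIS_TERMS = ("suicide", "kill myself", "self-harm", "overdose")

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def respond(user_message: str) -> str:
    reply = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model ID
        max_tokens=512,
        system=(
            "You are a supportive wellness assistant, not a clinician. "
            "Never diagnose or prescribe; direct users to licensed "
            "providers for clinical decisions."
        ),
        messages=[{"role": "user", "content": user_message}],
    ).content[0].text
    # Crisis escalation: surface real resources when crisis language appears.
    if any(term in user_message.lower() for term in CRISIS_TERMS):
        reply = f"{CRISIS_RESOURCES}\n\n{reply}"
    return f"{reply}\n\n{DISCLAIMER}"

A real deployment would also need human review paths and audited trigger logic; the point here is only where the clause's three requirements attach in the request flow.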

ConductAtlas Analysis

Institutional analysis (Compliance & governance intelligence)

REGULATORY FRAMEWORK: This provision implicates FTC Act Section 5 (deceptive health claims), FDA digital health guidance on Software as a Medical Device (SaMD; 21 CFR Part 820), HIPAA (45 CFR §§ 164.502-164.514) for operators handling protected health information, state mental health licensure laws, and the EU AI Act Annex III (high-risk AI in healthcare). Obligations connected to the 988 Suicide and Crisis Lifeline and the Mental Health Parity and Addiction Equity Act may also apply to operators deploying crisis-adjacent tools.


Applicable agencies

  • FTC: enforces against deceptive health claims by AI mental health platforms, including inadequate crisis resource provision.
  • HHS OCR: enforces HIPAA for covered entities and business associates handling protected health information in mental health AI applications.

Provision details

Document information
Document: Anthropic API Usage Policy
Entity: Anthropic
Document last updated: May 11, 2026

Tracking information
First tracked: March 6, 2026
Last verified: April 28, 2026
Record ID: CA-P-003873
Document ID: CA-D-00013

Evidence Provenance
Source URL: Wayback Machine
Content hash (SHA-256): fe6f60bf15130bb0c59c7054ad8111501f08769394cd72b598d456d524e13f2e
Analysis generated: March 6, 2026 20:36 UTC
Evidence: ✓ Snapshot stored · ✓ Hash verified
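Anyone can re-verify the stored snapshot against the digest recorded above with standard tooling; a minimal sketch in Python (the snapshot filename is a hypothetical placeholder):

# Sketch: recompute a snapshot's SHA-256 and compare it to the digest
# recorded in this provenance block. The filename is a placeholder.
import hashlib

RECORDED = "fe6f60bf15130bb0c59c7054ad8111501f08769394cd72b598d456d524e13f2e"

with open("anthropic-api-usage-policy.html", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("hash verified" if digest == RECORDED else "HASH MISMATCH")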
Citation Record
Entity: Anthropic
Document: Anthropic API Usage Policy
Record ID: CA-P-003873
Captured: 2026-03-06 20:36:08 UTC
SHA-256: fe6f60bf15130bb0…
URL: https://conductatlas.com/platform/anthropic/anthropic-api-usage-policy/high-risk-use-case-requirements-mental-health-and-crisis-support/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity: High


Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Anthropic's High-Risk Use Case Requirements — Mental Health and Crisis Support clause do?

Products built on Claude that provide crisis support or other emotional, mental, or behavioral health content must include crisis escalation mechanisms and support resources in the user experience, must not foster dependence on the product as a substitute for a mental health care provider, and must advise users to seek licensed healthcare providers for clinical diagnostic or treatment decisions.

How does this clause affect you?

If you are using a mental health or crisis support app powered by Claude, that app is required to provide real emergency resources and must not position itself as a substitute for professional clinical care — giving you an enforceable baseline of safety protections.

Is ConductAtlas affiliated with Anthropic?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Anthropic.