High-Risk Use Case Requirements — Mental Health and Crisis Support

High severity

What it is

Apps built on Claude that offer mental health or crisis support must include genuine crisis-escalation resources, must not position themselves as substitutes for professional mental health care, and must direct users to licensed providers for clinical diagnostic or treatment decisions.

Consumer impact (what this means for users)

If you are using a mental health or crisis support app powered by Claude, that app is required to surface real emergency resources and must not present itself as a substitute for professional clinical care, giving you a baseline of safety protections enforced through Anthropic's policy.


Why it matters (compliance & risk perspective)

This provision creates specific safety obligations for a rapidly growing category of AI wellness apps, protecting vulnerable users from over-relying on AI in moments of crisis.

Original clause language
Products or services providing crisis support or other emotional, mental, or behavioral health content... Must include appropriate crisis escalation mechanisms and support resources as part of the user experience... Must not facilitate user dependence on the product as a mental health care provider substitute... Must advise users to seek licensed healthcare providers for any clinical diagnostic or treatment decisions.
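
For illustration, the sketch below shows one way an operator could wire these obligations into an app's response path. It is a minimal sketch under stated assumptions, not Anthropic's required implementation: every name in it is hypothetical, the keyword screen is a crude stand-in for a real risk classifier, and the resource text assumes a US audience (the 988 Lifeline).

# Hypothetical crisis-escalation layer for a wellness app built on a chat
# model. The keyword screen is deliberately crude; a production system
# would use a proper risk classifier and localized resource lists.

CRISIS_RESOURCES = (
    "If you are in crisis, you can call or text 988 (US Suicide & Crisis "
    "Lifeline) or contact your local emergency services. This app is not "
    "a substitute for a licensed mental health professional."
)

CRISIS_SIGNALS = ("suicide", "kill myself", "self-harm", "end my life")


def screen_for_crisis(user_message: str) -> bool:
    """Rough screen for crisis language in a user message."""
    text = user_message.lower()
    return any(signal in text for signal in CRISIS_SIGNALS)


def respond(user_message: str, model_reply: str) -> str:
    """Surface escalation resources when risk is detected, and always
    append the licensed-provider advisory the clause calls for."""
    parts = []
    if screen_for_crisis(user_message):
        parts.append(CRISIS_RESOURCES)  # crisis resources lead the reply
    parts.append(model_reply)
    parts.append(
        "For any clinical diagnostic or treatment decision, please "
        "consult a licensed healthcare provider."
    )
    return "\n\n".join(parts)


if __name__ == "__main__":
    print(respond(
        "I feel like I want to end my life.",
        "I'm really sorry you're feeling this way. You're not alone.",
    ))

Placing the resources ahead of the model reply is a deliberate choice in this sketch: when risk is detected, escalation information should never be buried below generated text.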

Institutional analysis (Compliance & legal intelligence)

REGULATORY FRAMEWORK: This provision implicates Section 5 of the FTC Act (deceptive health claims), FDA digital health guidance on Software as a Medical Device (SaMD) together with the Quality System Regulation (21 CFR Part 820), the HIPAA Privacy Rule (45 CFR §§ 164.502-164.514) for operators handling protected health information, state mental health licensure laws, and Annex III of the EU AI Act (high-risk AI systems in healthcare). Obligations tied to the 988 Suicide and Crisis Lifeline and the Mental Health Parity and Addiction Equity Act may also apply to operators deploying crisis-adjacent tools.


Applicable agencies

  • FTC
    The FTC enforces against deceptive health claims by AI mental health platforms, including inadequate crisis resource provision.
  • HHS OCR
    HHS OCR enforces HIPAA for covered entities and business associates handling protected health information in mental health AI applications.

Provision details

Document information
Document: Anthropic Usage Policy
Entity: Anthropic
Document last updated: March 24, 2026

Tracking information
First tracked: March 6, 2026
Last verified: April 28, 2026
Record ID: CA-P-003873
Document ID: CA-D-00013

Evidence Provenance
Source URL: Wayback Machine
SHA-256: fe6f60bf15130bb0c59c7054ad8111501f08769394cd72b598d456d524e13f2e
Verified: ✓ Snapshot stored · ✓ Change verified
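
Anyone holding a copy of the archived snapshot can recheck it against the digest recorded above. A minimal sketch, assuming a hypothetical local filename for the stored snapshot:

# Verify a stored snapshot against the SHA-256 digest published in this
# record. The filename below is hypothetical; the digest is the one above.

import hashlib

RECORDED_SHA256 = "fe6f60bf15130bb0c59c7054ad8111501f08769394cd72b598d456d524e13f2e"


def sha256_of(path: str) -> str:
    """Stream the file in chunks so large snapshots stay memory-friendly."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    computed = sha256_of("anthropic-usage-policy-snapshot.html")
    print("verified" if computed == RECORDED_SHA256 else "digest mismatch")
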
How to Cite
ConductAtlas Policy Archive
Entity: Anthropic | Document: Anthropic Usage Policy | Record: CA-P-003873
Captured: 2026-03-06 20:36:08 UTC | SHA-256: fe6f60bf15130bb0…
URL: https://conductatlas.com/platform/anthropic/anthropic-usage-policy/high-risk-use-case-requirements-mental-health-and-crisis-support/
Accessed: April 29, 2026
Classification
Severity: High
Categories:
