
Psychological and Emotionally Harmful Content Prohibition

Medium severity

Why it matters (compliance & risk perspective)

This provision is particularly relevant for mental health, companion AI, and consumer chatbot applications, where vulnerable users may be exposed to harmful content. It also engages the High-Risk Use Case Requirements elsewhere in the policy.

Consumer impact (what this means for users)

Anthropic's Usage Policy directly affects what you can ask Claude to do: violations can result in your account being throttled, suspended, or permanently terminated without prior notice. The policy applies equally to third-party apps built on Claude, so an app developer's failure to comply can affect your access as well. You can report harmful or inaccurate AI outputs to usersafety@anthropic.com or via the in-product thumbs-down feedback feature.

How other platforms handle this

Yelp (Medium severity)

Yelp does not knowingly collect personal information from children under the age of 13, and our Service is not directed to children. Access or use of the Service by anyone under the age of 13 is not allowed. If you become aware that a child under 13, or under the applicable age of consent, has provi...

Apple (Medium severity)

We will reject apps for any content or behavior that we believe is over the line. What line, you ask? Well, as a Supreme Court Justice once said, "I'll know it when I see it." And we think that you will also know it when you cross it. Apps that present excessively violent or offensive content, adult...

Google (Medium severity)

If you're a minor in your country, you must have your parent or legal guardian's permission to use our services. Please have your parent or legal guardian read these terms with you. If you're a parent or legal guardian, and you allow your child to use the services, then these terms apply to you and ...


Do Not Create Psychologically or Emotionally Harmful Content

Applicable regulations

CFAA (United States, Federal)
DMCA (United States, Federal)
DSA (European Union)

Provision details

Document information
Document: Anthropic Usage Policy
Entity: Anthropic
Document last updated: April 29, 2026

Tracking information
First tracked: March 6, 2026
Last verified: April 28, 2026
Record ID: CA-P-002575
Document ID: CA-D-00013

Evidence Provenance
Source URL: Wayback Machine
SHA-256: fe6f60bf15130bb0c59c7054ad8111501f08769394cd72b598d456d524e13f2e
Verified: ✓ Snapshot stored, ✓ Change verified
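If you have downloaded the archived snapshot yourself, you can check it against the published digest with any SHA-256 tool. A minimal sketch in Python (the file path and the `verify_snapshot` helper name are illustrative, not part of the archive's tooling):

```python
import hashlib

def verify_snapshot(path: str, expected_sha256: str) -> bool:
    """Hash a locally saved snapshot file and compare it to a published digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large snapshots don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256.lower()
```

A match confirms the bytes on disk are the same bytes the archive hashed; it does not, by itself, prove when or from where the snapshot was captured.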
How to Cite
ConductAtlas Policy Archive
Entity: Anthropic | Document: Anthropic Usage Policy | Record: CA-P-002575
Captured: 2026-03-06 20:36:08 UTC | SHA-256: fe6f60bf15130bb0…
URL: https://conductatlas.com/platform/anthropic/anthropic-usage-policy/psychological-and-emotionally-harmful-content-prohibition/
Accessed: May 4, 2026
Classification
Severity: Medium
Categories


Frequently Asked Questions

What does Anthropic's Psychological and Emotionally Harmful Content Prohibition clause do?

The clause prohibits using Anthropic's services to create psychologically or emotionally harmful content. It is particularly relevant for mental health, companion AI, and consumer chatbot applications, where vulnerable users may be exposed to harmful content, and it engages the policy's High-Risk Use Case Requirements.

Is ConductAtlas affiliated with Anthropic?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Anthropic.