
AI Impersonation Prohibition

Medium severity

Why it matters (compliance & risk perspective)

This prohibition protects consumers from AI-driven deception and directly engages emerging AI transparency laws and FTC guidance on deceptive practices.

Consumer impact (what this means for users)

Anthropic's AUP directly affects what you can ask Claude to do — violations can result in your account being throttled, suspended, or permanently terminated without prior notice. For users of third-party apps built on Claude, the policy applies equally, meaning the app developer's failure to comply can affect your access too. You can report harmful or inaccurate AI outputs at usersafety@anthropic.com or via the in-product thumbs-down feedback feature.

How other platforms handle this

Netflix (Medium severity)

4.1. You must be at least 18 years of age, or the age of majority in your province, territory or country, to become a member of the Netflix service. Minors may only use the service under the supervision of an adult.

Snapchat (Medium severity)

If you believe that your copyrighted work has been copied in a way that constitutes copyright infringement and is accessible via the Services, please notify Snap's copyright agent as set forth in the Digital Millennium Copyright Act of 1998 (DMCA). For your complaint to be valid under the DMCA, you ...

Shopify (Medium severity)

You may not use the Shopify Services to send unsolicited communications, promotions, or advertisements (spam) to users who have not opted in to receive such communications.


Original clause language
This includes using our products or services to: Impersonate a human by presenting results as human-generated, or using results in a manner intended to convince a natural person that they are communicating with a natural person when they are not.

Applicable regulations

CFAA
United States Federal
DMCA
United States Federal
DSA
European Union

Provision details

Document information
Document: Anthropic Usage Policy
Entity: Anthropic
Document last updated: April 29, 2026

Tracking information
First tracked: March 6, 2026
Last verified: April 28, 2026
Record ID: CA-P-002570
Document ID: CA-D-00013

Evidence Provenance
Source URL: Wayback Machine
SHA-256: fe6f60bf15130bb0c59c7054ad8111501f08769394cd72b598d456d524e13f2e
Verified: ✓ Snapshot stored · ✓ Change verified
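The SHA-256 digest above lets anyone independently confirm that an archived copy of the policy page matches the snapshot ConductAtlas verified. As a minimal sketch (assuming you have saved the archived page locally; the filename `snapshot.html` is a placeholder, not part of the record), the check can be done with Python's standard library:

```python
import hashlib

# Expected digest, copied from the provenance record above.
EXPECTED_SHA256 = "fe6f60bf15130bb0c59c7054ad8111501f08769394cd72b598d456d524e13f2e"

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Hash the file in chunks so large snapshots need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (hypothetical local copy of the archived page):
#   sha256_of_file("snapshot.html") == EXPECTED_SHA256
```

Note that the digest is over the exact archived bytes: any re-download that differs by even a trailing newline will produce a completely different hash.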
How to Cite
ConductAtlas Policy Archive
Entity: Anthropic | Document: Anthropic Usage Policy | Record: CA-P-002570
Captured: 2026-03-06 20:36:08 UTC | SHA-256: fe6f60bf15130bb0…
URL: https://conductatlas.com/platform/anthropic/anthropic-usage-policy/ai-impersonation-prohibition/
Accessed: May 4, 2026
Classification
Severity: Medium
Categories


Frequently Asked Questions

What does Anthropic's AI Impersonation Prohibition clause do?

This prohibition protects consumers from AI-driven deception and directly engages emerging AI transparency laws and FTC guidance on deceptive practices.

Is ConductAtlas affiliated with Anthropic?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Anthropic.