Anthropic · Anthropic API Usage Policy

Prohibition on Privacy and Identity Compromise

Medium severity · Unique (found on 0 of 325 tracked platforms)
Document Record

What it is

You cannot use Claude to collect people's private data without permission, access health or biometric information unlawfully, or deceive someone into thinking they are talking to a real human rather than an AI.

This analysis describes what Anthropic's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

The explicit inclusion of neural data and the anti-impersonation rule are unusually specific and forward-looking compared to most AI platform AUPs, protecting users against emerging AI-enabled privacy and deception harms.

Recent Activity

This document changed recently.

High severity · Feb 27, 2026: Defense contractors and federal agencies using Claude must find alternatives. Enterprise customers with defense-adjacent business face compliance risk.

Consumer impact (what this means for users)

Users are protected from having their biometric, health, or neural data harvested through Claude, and are entitled to know when they are interacting with an AI rather than a human, an obligation the policy imposes on any operator deploying Claude.

What you can do

⚠️ These actions may provide transparency or partial mitigation but may not fully address the underlying issue. Effectiveness varies by jurisdiction and individual circumstances.
  • Report Misuse of Your Data
    If you believe your private, biometric, or health data has been misused through an Anthropic product, email usersafety@anthropic.com describing the specific data and the nature of the misuse. Anthropic's Safeguards Team will review the report.

How other platforms handle this

Runway Medium

You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.

Mistral AI Medium

Customer will not, and will not permit any other person (including any End User) to: ... (d) attempt to reverse engineer, decompile, or otherwise attempt to discover the source code or underlying components (e.g., algorithms, weights, or systems) of the Mistral AI Products, including using the Outpu...

Perplexity AI Medium

You may not use the Services to attempt to circumvent, disable, or otherwise interfere with safety-related features of the Services, including features that prevent or restrict the generation of certain types of content.


Monitoring

Anthropic has changed this document before.

Original Clause Language
Violate privacy rights as defined by applicable privacy laws, such as sharing personal information without consent or accessing private data unlawfully... Misuse, collect, solicit, or gain access without permission to private information such as non-public contact details, health data, biometric or neural data (including facial recognition), or confidential or proprietary data... Impersonate a human by presenting results as human-generated, or using results in a manner intended to convince a natural person that they are communicating with a natural person when they are not.

— Excerpt from Anthropic's Anthropic API Usage Policy


Institutional analysis (Compliance & governance intelligence)

(1) REGULATORY FRAMEWORK: This provision implicates GDPR Art. 9 (special category data, including biometric and health data) and Art. 22 (automated decision-making), CCPA §§ 1798.100 and 1798.140 (sensitive personal information, including biometric and health data), the Illinois Biometric Information Privacy Act (740 ILCS 14/1, biometric data), the EU AI Act Art. 5(1)(a) (prohibition on subliminal manipulation), FTC Act Section 5 (deceptive AI impersonation), and the New York SHIELD Act (N.Y. Gen. Bus. Law § 899-bb). Neural data protections are specifically addressed in Colorado's HB 24-1058 and emerging neurological privacy frameworks.


Applicable agencies

  • FTC
    The FTC has authority over deceptive AI impersonation practices and unauthorized collection of sensitive personal data under FTC Act Section 5.
  • State AG
    State Attorneys General enforce CCPA, BIPA, and state privacy laws implicated by unauthorized biometric and health data collection.

Applicable regulations

  • CFAA (United States, federal)
  • DMCA (United States, federal)
  • DSA (European Union)
  • Trump Executive Order on AI Policy Framework (United States)

Provision details

Document information
Document: Anthropic API Usage Policy
Entity: Anthropic
Document last updated: May 11, 2026

Tracking information
First tracked: March 6, 2026
Last verified: April 28, 2026
Record ID: CA-P-003872
Document ID: CA-D-00013

Evidence Provenance
Source URL: Wayback Machine
Content hash (SHA-256): fe6f60bf15130bb0c59c7054ad8111501f08769394cd72b598d456d524e13f2e
Analysis generated: March 6, 2026 20:36 UTC
Evidence: ✓ Snapshot stored · ✓ Hash verified
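The published SHA-256 lets anyone independently re-verify a stored snapshot against this record. A minimal sketch in Python, assuming you have downloaded a local copy of the archived document (the file name `snapshot.html` is hypothetical; the expected digest is the content hash published above):

```python
import hashlib

# Content hash published in the Evidence Provenance record above.
EXPECTED = "fe6f60bf15130bb0c59c7054ad8111501f08769394cd72b598d456d524e13f2e"

def sha256_hex(path: str) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# 'snapshot.html' is a hypothetical local copy of the archived page.
# if sha256_hex("snapshot.html") == EXPECTED:
#     print("Snapshot matches the published content hash.")
```

Note that the hash is taken over the exact captured bytes; re-downloading the live page (which may have changed) will generally not reproduce it.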
Citation Record
Entity: Anthropic
Document: Anthropic API Usage Policy
Record ID: CA-P-003872
Captured: 2026-03-06 20:36:08 UTC
SHA-256: fe6f60bf15130bb0…
URL: https://conductatlas.com/platform/anthropic/anthropic-api-usage-policy/prohibition-on-privacy-and-identity-compromise/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity: Medium


Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Anthropic's Prohibition on Privacy and Identity Compromise clause do?

It bars using Claude to collect people's private data without permission, to unlawfully access health, biometric, or neural data, or to pass off AI output as human-generated. The explicit inclusion of neural data and the anti-impersonation rule are unusually specific and forward-looking compared to most AI platform AUPs, protecting users against emerging AI-enabled privacy and deception harms.

How does this clause affect you?

Users are protected from having their biometric, health, or neural data harvested through Claude, and are entitled to know when they are interacting with an AI rather than a human, an obligation the policy imposes on any operator deploying Claude.

Is ConductAtlas affiliated with Anthropic?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Anthropic.