Anthropic · Anthropic Usage Policy

Reporting Mechanism for Harmful Outputs

Low severity

What it is

If Claude says something harmful, wrong, or biased, you can report it by emailing usersafety@anthropic.com or clicking the thumbs-down button in the product.

Consumer impact (what this means for users)

You have a dedicated email address (usersafety@anthropic.com) and an in-product reporting button for flagging harmful, biased, or inaccurate Claude outputs, giving you a direct line to Anthropic's Safeguards Team.

What you can do

⚠️ These actions may provide transparency or partial mitigation but may not fully address the underlying issue. Effectiveness varies by jurisdiction and individual circumstances.
  • Report a Harmful Output
    Email usersafety@anthropic.com describing the harmful, biased, or inaccurate output you encountered, including the context of your request. Alternatively, use the thumbs-down button in the Claude product interface to report the specific response directly.

Cross-platform context

See how other platforms handle Reporting Mechanism for Harmful Outputs and similar clauses.


Why it matters (compliance & risk perspective)

This provision establishes a concrete, publicly available safety reporting channel — a basic but important transparency mechanism that gives users meaningful recourse when they encounter harmful AI outputs.

Original clause language
If you believe that our model outputs are potentially inaccurate, biased or harmful, please notify us at usersafety@anthropic.com, or report it directly in our product through the "report issues" thumbs down button or similar feedback features (where available). You can read more about our Safeguards practices and recommendations in our Safeguards Support Center.

Institutional analysis (Compliance & legal intelligence)

(1) REGULATORY FRAMEWORK: This provision engages the EU AI Act Art. 86 (right to explanation and complaint mechanisms for high-risk AI), the EU DSA Art. 16 (notice-and-action mechanisms for illegal content), and FTC guidance on adequate consumer recourse for AI products. It also implicates GDPR Art. 77 (right to lodge a complaint with a supervisory authority) as a parallel but independent mechanism. The provision partially addresses EU AI Act Art. 14(4) transparency requirements for human oversight contact points.


Applicable agencies

  • FTC
    The FTC has authority over deceptive or unfair AI practices and can act if Anthropic's harm reporting mechanism is found to be inadequate or non-responsive.
    File a complaint →

Provision details

Document information
Document
Anthropic Usage Policy
Entity
Anthropic
Document last updated
April 29, 2026
Tracking information
First tracked
March 6, 2026
Last verified
April 28, 2026
Record ID
CA-P-003877
Document ID
CA-D-00013
Evidence Provenance
Source URL
Wayback Machine
SHA-256
fe6f60bf15130bb0c59c7054ad8111501f08769394cd72b598d456d524e13f2e
Verified
✓ Snapshot stored   ✓ Change verified
How to Cite
ConductAtlas Policy Archive
Entity: Anthropic | Document: Anthropic Usage Policy | Record: CA-P-003877
Captured: 2026-03-06 20:36:08 UTC | SHA-256: fe6f60bf15130bb0…
URL: https://conductatlas.com/platform/anthropic/anthropic-usage-policy/reporting-mechanism-for-harmful-outputs/
Accessed: May 2, 2026
Classification
Severity
Low
Categories
