If Claude says something harmful, wrong, or biased, you can report it by emailing usersafety@anthropic.com or clicking the thumbs-down button in the product.
The policy provides a named email address (usersafety@anthropic.com) and an in-product reporting button for flagging harmful, biased, or inaccurate Claude outputs, giving users a direct line to Anthropic's Safeguards Team.
This provision establishes a concrete, publicly available safety reporting channel — a basic but important transparency mechanism that gives users meaningful recourse when they encounter harmful AI outputs.
(1) REGULATORY FRAMEWORK: This provision engages the EU AI Act Art. 86 (right to explanation and complaint mechanisms for high-risk AI), the EU DSA Art. 16 (notice-and-action mechanisms for illegal content), and FTC guidance on adequate consumer recourse for AI products. It also implicates GDPR Art. 77 (right to lodge a complaint with a supervisory authority) as a parallel but independent mechanism. The provision partially addresses the EU AI Act Art. 14(4) transparency requirements for human oversight contact points.