Our High-Risk Use Case Requirements apply to specific consumer-facing use cases that pose an elevated risk of harm.
This tiered structure creates differential obligations: operators in regulated industries such as healthcare, law, and finance face significantly greater compliance burdens when deploying Claude-powered products.
Anthropic's Usage Policy affects all users by setting clear boundaries on how Claude can be used, with real consequences for violations, including throttling, suspension, or permanent termination of access. The policy is actively enforced by a dedicated Safeguards Team, which means user inputs may be reviewed, and CSAM-related violations are reported to law enforcement. You can report harmful, biased, or inaccurate AI outputs directly to usersafety@anthropic.com or via the thumbs-down feedback button in Anthropic's products.