Anthropic's Safeguards Team will implement detection and monitoring to enforce our Usage Policy, so please review this policy carefully before using our products or services. If we learn that you have violated our Usage Policy, we may throttle, suspend, or terminate your access to our products and services. We may also block or modify model outputs when inputs violate our Usage Policy.
Because a dedicated team actively monitors user inputs, your interactions with Claude are not private from Anthropic, and outputs can be modified without notifying the user. Both practices carry significant privacy and transparency implications.
Anthropic's Usage Policy affects all users by setting clear boundaries on how Claude can be used, with real consequences for violations, including throttling, suspension, or permanent termination of access. Because the Safeguards Team actively enforces the policy, user inputs may be reviewed, and CSAM-related violations will be reported to law enforcement. You can report harmful, biased, or inaccurate AI outputs directly to usersafety@anthropic.com or via the thumbs-down feedback button in Anthropic's products.