Our Additional Use Case Guidelines apply to certain other use cases, including consumer-facing chatbots, products serving minors, agentic use, and Model Context Protocol servers.
The agentic AI guidelines signal a forward-looking regulatory posture: they anticipate the kinds of requirements the EU AI Act places on autonomous systems and address the novel risks of AI that acts in the world rather than simply generating text.
Anthropic's Usage Policy affects all users by establishing clear boundaries on how Claude can be used, with real consequences for violations, including throttling, suspension, or permanent termination of access. Because a dedicated Safeguards Team actively monitors for violations, user inputs may be reviewed, and CSAM-related violations will be reported to law enforcement. You can report harmful, biased, or inaccurate AI outputs directly to usersafety@anthropic.com or via the thumbs-down feedback button in Anthropic's products.