The policy prohibits users from doing any of the following:

- Create, distribute, or promote child sexual abuse material ("CSAM"), including AI-generated CSAM;
- Facilitate the trafficking, sextortion, or any other form of exploitation of a minor;
- Facilitate minor grooming, including generating content designed to impersonate a minor;
- Fetishize or sexualize minors, including in fictional settings or via roleplay with the model.
Because the prohibition explicitly extends to fictional and roleplay contexts, it closes creative-framing loopholes, and the mandatory reporting commitment attaches real law enforcement consequences to violations.
Anthropic's Usage Policy affects all users by setting clear boundaries on how Claude can be used, with real consequences for violations, including throttling, suspension, or permanent termination of access. Because a dedicated Safeguards Team actively monitors for policy violations, user inputs may be reviewed, and CSAM-related violations will be reported to law enforcement. Harmful, biased, or inaccurate AI outputs can be reported directly to usersafety@anthropic.com or via the thumbs-down feedback button in Anthropic's products.