Anthropic may enter into contracts with certain governmental customers that tailor use restrictions to that customer's public mission and legal authorities if, in Anthropic's judgment, the contractual use restrictions and applicable safeguards are adequate to mitigate the potential harms addressed by this Usage Policy.
This carve-out means the Universal Usage Standards are not truly universal — government customers may be permitted to use Claude for activities that would result in suspension or termination for private users, with Anthropic as the sole arbiter of adequacy.
Anthropic's Usage Policy affects all users by establishing clear boundaries on how Claude can be used, with real consequences for violations, including throttling, suspension, or permanent termination of access. Because a dedicated Safeguards Team actively monitors for violations, user inputs may be reviewed, and CSAM-related violations will be reported to law enforcement. You can report harmful, biased, or inaccurate AI outputs directly to usersafety@anthropic.com or via the thumbs-down feedback button in Anthropic's products.