Synthesize, or otherwise develop, high-yield explosives or biological, chemical, radiological, or nuclear weapons or their precursors, including modifications to evade detection or medical countermeasures.
The explicit inclusion of 'modifications to evade detection or medical countermeasures' closes a loophole present in some other AI platforms' policies, making this prohibition among the most comprehensive CBRN restrictions in any publicly available AI acceptable use policy.
Anthropic's Usage Policy affects all users by setting clear boundaries on how Claude may be used, with real consequences for violations, including throttling, suspension, or permanent termination of access. Because a dedicated Safeguards Team actively monitors for violations, user inputs may be reviewed, and CSAM-related violations are reported to law enforcement. You can report harmful, biased, or inaccurate AI outputs directly to usersafety@anthropic.com or via the thumbs-down feedback button in Anthropic's products.