Anthropic · Anthropic Usage Policy

Universal Prohibition on Weapons Development

High severity

Why it matters (compliance & risk perspective)

This prohibition directly prevents misuse of AI for mass-casualty weapon development and exposes violators to serious federal criminal liability.

Consumer impact (what this means for users)

Anthropic's Usage Policy directly affects what you can ask Claude to do: violations can result in your account being throttled, suspended, or permanently terminated without prior notice. The policy applies equally to third-party apps built on Claude, so an app developer's failure to comply can affect your access as well. You can report harmful or inaccurate outputs by emailing usersafety@anthropic.com or using the in-product thumbs-down feedback feature.

How other platforms handle this

Anthropic Claude (Medium severity)

You must be at least 18 years old or the minimum age required to consent to use the Services in your location, whichever is higher.

LinkedIn (Medium severity)

The Services are not for use by anyone under the age of 16. To use the Services, you agree that: (1) you must be the "Minimum Age" (described below) or older; (2) you will only have one LinkedIn account, which must be in your real name; and (3) you are not already restricted by LinkedIn from using t...

BeReal (Medium severity)

BeReal never knowingly or willingly collects any personal data concerning children under 13 years of age. If you are under 13, please do not use BeReal.


Original clause language
This includes using our products or services to:

Produce, modify, design, or illegally acquire weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life;

Design or develop weaponization and delivery processes for the deployment of weapons;

Circumvent regulatory controls to acquire weapons or their precursors;

Synthesize, or otherwise develop, high-yield explosives or biological, chemical, radiological, or nuclear weapons or their precursors, including modifications to evade detection or medical countermeasures.

Applicable regulations

CFAA (United States, federal)
DMCA (United States, federal)
DSA (European Union)

Provision details

Document information
Document: Anthropic Usage Policy
Entity: Anthropic
Document last updated: April 29, 2026

Tracking information
First tracked: March 6, 2026
Last verified: April 28, 2026
Record ID: CA-P-002568
Document ID: CA-D-00013

Evidence provenance
Source URL: Wayback Machine
SHA-256: fe6f60bf15130bb0c59c7054ad8111501f08769394cd72b598d456d524e13f2e
Verified: ✓ Snapshot stored · ✓ Change verified
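An archived copy of the policy page can be checked against the published digest with any SHA-256 tool. A minimal sketch in Python, assuming you have saved the snapshot locally (the filename `snapshot.html` is hypothetical):

```python
import hashlib

# Digest published in the Evidence Provenance block above
EXPECTED_SHA256 = "fe6f60bf15130bb0c59c7054ad8111501f08769394cd72b598d456d524e13f2e"

def verify_snapshot(data: bytes) -> bool:
    """Return True if the snapshot bytes hash to the published digest."""
    return hashlib.sha256(data).hexdigest() == EXPECTED_SHA256

# Example: check a locally saved copy of the archived page
# with open("snapshot.html", "rb") as f:   # hypothetical local filename
#     print(verify_snapshot(f.read()))
```

A matching digest only confirms the file is byte-identical to the snapshot ConductAtlas hashed; it does not itself prove when or from where the snapshot was captured.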
How to cite
ConductAtlas Policy Archive
Entity: Anthropic | Document: Anthropic Usage Policy | Record: CA-P-002568
Captured: 2026-03-06 20:36:08 UTC | SHA-256: fe6f60bf15130bb0…
URL: https://conductatlas.com/platform/anthropic/anthropic-usage-policy/universal-prohibition-on-weapons-development/
Accessed: May 4, 2026

Classification
Severity: High
Categories


Frequently Asked Questions

What does Anthropic's Universal Prohibition on Weapons Development clause do?

This prohibition directly prevents misuse of AI for mass-casualty weapon development and exposes violators to serious federal criminal liability.

Is ConductAtlas affiliated with Anthropic?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Anthropic.