Claude cannot be used to promote terrorism, support extremist organizations, incite violence against anyone, or spread discrimination based on race, religion, gender, sexuality, or other protected characteristics.
This analysis describes what Anthropic's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
The material support prohibition aligns Claude's AUP with federal terrorism statutes, creating a direct interface between platform policy and criminal law obligations.
Defense contractors and federal agencies whose use cases fall within this prohibition must find alternatives, and enterprise customers with defense-adjacent business face compliance risk.
Users are protected from being exposed to AI-generated extremist content, hate speech, or targeted discrimination through Anthropic's products — and operators who build such functionality face account termination.
How other platforms handle this
You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.
Customer will not, and will not permit any other person (including any End User) to: ... (d) attempt to reverse engineer, decompile, or otherwise attempt to discover the source code or underlying components (e.g., algorithms, weights, or systems) of the Mistral AI Products, including using the Outpu...
You may not use the Services to attempt to circumvent, disable, or otherwise interfere with safety-related features of the Services, including features that prevent or restrict the generation of certain types of content.
"Incite, facilitate, or promote violent extremism, terrorism, or hateful behavior... Provide material support for organizations or individuals associated with violent extremism, terrorism, or hateful behavior... Promote discriminatory practices or behaviors against individuals or groups on the basis of one or more protected attributes such as race, ethnicity, religion, national origin, gender, sexual orientation, or any other identifying trait."
— Excerpt from the Anthropic API Usage Policy
REGULATORY FRAMEWORK: This provision engages 18 U.S.C. § 2339B (material support to designated terrorist organizations), 18 U.S.C. § 875 (interstate threats), Title VII of the Civil Rights Act (discriminatory content in employment contexts), the EU Terrorist Content Online Regulation (EU 2021/784, one-hour removal obligation), the EU Digital Services Act (DSA, Art. 16 notice-and-action for illegal content), and Section 230 of the Communications Decency Act (47 U.S.C. § 230) as a potential liability shield.
ConductAtlas is an independent monitoring service. It is not affiliated with, endorsed by, or sponsored by Anthropic.