Claude cannot be used to promote terrorism, support extremist organizations, incite violence against anyone, or spread discrimination based on race, religion, gender, sexuality, or other protected characteristics.
Users are protected from being exposed to AI-generated extremist content, hate speech, or targeted discrimination through Anthropic's products — and operators who build such functionality face account termination.
The material support prohibition aligns Claude's AUP with federal terrorism statutes, creating a direct interface between platform policy and criminal law obligations.
REGULATORY FRAMEWORK: This provision engages 18 U.S.C. § 2339B (material support to designated terrorist organizations), 18 U.S.C. § 875 (interstate threats), Title VII of the Civil Rights Act (discriminatory content in employment contexts), the EU Terrorist Content Online Regulation (EU 2021/784, one-hour removal obligation), the EU Digital Services Act (DSA, Art. 16 notice-and-action for illegal content), and Section 230 of the Communications Decency Act (47 U.S.C. § 230) as a potential liability shield.