This analysis describes what Anthropic's agreement states, permits, or reserves; it is not a legal determination of enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This is one of the most serious prohibitions in the policy: violations can constitute federal crimes, and Anthropic has committed to active detection and mandatory reporting to law enforcement.
Defense contractors and federal agencies using Claude must find alternatives. Enterprise customers with defense-adjacent business face compliance risk.
Anthropic's AUP directly affects what you can ask Claude to do: violations can result in your account being throttled, suspended, or permanently terminated without prior notice. The policy applies equally to users of third-party apps built on Claude, so an app developer's failure to comply can affect your access as well. You can report harmful or inaccurate AI outputs at usersafety@anthropic.com or via the in-product thumbs-down feedback feature.
How other platforms handle this
You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.
Customer will not, and will not permit any other person (including any End User) to: ... (d) attempt to reverse engineer, decompile, or otherwise attempt to discover the source code or underlying components (e.g., algorithms, weights, or systems) of the Mistral AI Products, including using the Outpu...
You may not use the Services to attempt to circumvent, disable, or otherwise interfere with safety-related features of the Services, including features that prevent or restrict the generation of certain types of content.
Monitoring
Anthropic has changed this document before.
"This includes using our products or services to: Create, distribute, or promote child sexual abuse material ("CSAM"), including AI-generated CSAM; Facilitate the trafficking, sextortion, or any other form of exploitation of a minor; Facilitate minor grooming, including generating content designed to impersonate a minor; Facilitate child abuse of any form, including instructions for how to conceal abuse; Promote or facilitate pedophilic relationships, including via roleplay with the model; Fetishize or sexualize minors, including in fictional settings or via roleplay with the model. Note: We define a minor or child to be any individual under the age of 18 years old, regardless of jurisdiction. When we detect CSAM (including AI-generated CSAM), or coercion or enticement of a minor to engage in sexual activities, we will report to relevant authorities."
Excerpt from Anthropic's Anthropic API Usage Policy
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Anthropic.