The policy prohibits using Stability AI's models to assist in developing weapons of mass destruction, cyberweapons, or to conduct attacks on critical infrastructure such as power grids, water systems, or financial systems.
This analysis describes what Stability AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This prohibition addresses national security-adjacent use cases and establishes that any attempt to use the AI for weapons development or infrastructure disruption is a policy violation subject to immediate termination and potential referral to law enforcement.
Interpretive note: Exact verbatim policy text was unavailable due to HTML truncation; specific carve-outs for authorized security research or dual-use scenarios cannot be confirmed without the full document.
Individual users and enterprise API customers are prohibited from using Stability AI's services for any weapons development, malware creation, or critical infrastructure attack planning; violations of this provision are among the most severe categories and would result in account termination.
How other platforms handle this
You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.
Customer will not, and will not permit any other person (including any End User) to: ... (d) attempt to reverse engineer, decompile, or otherwise attempt to discover the source code or underlying components (e.g., algorithms, weights, or systems) of the Mistral AI Products, including using the Outpu...
You may not use the Services to attempt to circumvent, disable, or otherwise interfere with safety-related features of the Services, including features that prevent or restrict the generation of certain types of content.
Monitoring
Stability AI has changed this document before.
(1) REGULATORY LANDSCAPE: This provision engages US export control regulations under the Export Administration Regulations (EAR), administered by the Bureau of Industry and Security (BIS), as well as OFAC sanctions programs and the Computer Fraud and Abuse Act (CFAA). In the EU, the AI Act explicitly prohibits AI systems that pose an unacceptable risk through their potential for mass harm. UK export control legislation and the Computer Misuse Act are also engaged. The prohibition on assistance with biological, chemical, nuclear, or radiological weapons implicates the Biological Weapons Anti-Terrorism Act and equivalent international frameworks.
(2) GOVERNANCE EXPOSURE: High for enterprise customers operating in dual-use technology sectors, defense contracting, or cybersecurity research. Even legitimate security research use cases may require careful assessment against the policy's prohibition language, which may not clearly delineate permitted offensive security research from prohibited weaponization.
(3) JURISDICTION FLAGS: Export control applicability depends heavily on the nationality of the user and the destination of outputs; US export control law applies extraterritorially in certain circumstances. Enterprise customers with international operations should assess whether their API use triggers EAR licensing requirements.
(4) CONTRACT AND VENDOR IMPLICATIONS: Enterprise procurement teams should assess whether the AUP's weapons prohibition creates tension with legitimate dual-use research or penetration testing activities. The policy may not contain explicit carve-outs for authorized security research, which could create ambiguity for cybersecurity firms using the API.
(5) COMPLIANCE CONSIDERATIONS: Enterprise API customers in defense, cybersecurity, or research sectors should document the specific use cases for which they deploy Stability AI models and confirm with Stability AI whether those use cases fall within permitted applications.
Legal teams should assess whether their users' activities could be characterized as prohibited under the policy's weapons and infrastructure attack language.
Is ConductAtlas affiliated with Stability AI?
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Stability AI.