Stability AI · Stability AI Acceptable Use Policy

Weapons Development and Critical Infrastructure Prohibition

High severity · Low confidence (inferred from context) · Unique · 0 of 325 platforms
Document Record

What it is

The policy prohibits using Stability AI's models to assist in developing weapons of mass destruction, cyberweapons, or to conduct attacks on critical infrastructure such as power grids, water systems, or financial systems.

This analysis describes what Stability AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This prohibition addresses national security-adjacent use cases and establishes that any attempt to use the AI for weapons development or infrastructure disruption is a policy violation subject to immediate termination and potential referral to law enforcement.

Interpretive note: Exact verbatim policy text was unavailable due to HTML truncation; specific carve-outs for authorized security research or dual-use scenarios cannot be confirmed without the full document.

Consumer impact (what this means for users)

Individual users and enterprise API customers are prohibited from using Stability AI's services for any weapons development, malware creation, or critical infrastructure attack planning; violations of this provision are among the most severe categories and would result in account termination.

How other platforms handle this

Runway — Medium severity

You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.

Mistral AI — Medium severity

Customer will not, and will not permit any other person (including any End User) to: ... (d) attempt to reverse engineer, decompile, or otherwise attempt to discover the source code or underlying components (e.g., algorithms, weights, or systems) of the Mistral AI Products, including using the Outpu...

Perplexity AI — Medium severity

You may not use the Services to attempt to circumvent, disable, or otherwise interfere with safety-related features of the Services, including features that prevent or restrict the generation of certain types of content.


Monitoring

Stability AI has changed this document before.


Institutional analysis (Compliance & governance intelligence)

(1) Regulatory landscape: This provision engages US export control regulations under the Export Administration Regulations (EAR) administered by the Bureau of Industry and Security (BIS), OFAC sanctions programs, and the Computer Fraud and Abuse Act (CFAA). In the EU, the AI Act explicitly prohibits AI systems that pose unacceptable risk through potential for mass harm. UK export control legislation and the Computer Misuse Act are also engaged. The prohibition on assistance with biological, chemical, nuclear, or radiological weapons engages the Biological Weapons Anti-Terrorism Act and equivalent international frameworks.

(2) Governance exposure: High for enterprise customers operating in dual-use technology sectors, defense contracting, or cybersecurity research. Even legitimate security research use cases may require careful assessment against the policy's prohibition language, which may not clearly delineate permitted offensive security research from prohibited weaponization.

(3) Jurisdiction flags: Export control applicability depends heavily on the nationality of the user and the destination of outputs; US export control law applies extraterritorially in certain circumstances. Enterprise customers with international operations should assess whether their API use triggers EAR licensing requirements.

(4) Contract and vendor implications: Enterprise procurement teams should assess whether the AUP's weapons prohibition creates tension with legitimate dual-use research or penetration testing activities. The policy may not contain explicit carve-outs for authorized security research, which could create ambiguity for cybersecurity firms using the API.

(5) Compliance considerations: Enterprise API customers in the defense, cybersecurity, or research sectors should document the specific use cases for which they deploy Stability AI models and confirm with Stability AI whether those use cases fall within permitted applications. Legal teams should assess whether their users' activities could be characterized as prohibited under the policy's weapons and infrastructure attack language.


Applicable agencies

  • FTC
    The FTC has general consumer protection authority that may apply where AI services are used deceptively in connection with prohibited harmful conduct.

Applicable regulations

  • CFAA — United States Federal
  • DMCA — United States Federal
  • DSA — European Union
  • Trump Executive Order on AI Policy Framework — United States Federal

Provision details

Document information
  • Document: Stability AI Acceptable Use Policy
  • Entity: Stability AI
  • Document last updated: May 11, 2026
Tracking information
  • First tracked: May 11, 2026
  • Last verified: May 12, 2026
  • Record ID: CA-P-011535
  • Document ID: CA-D-00772
Evidence Provenance
  • Source URL: Wayback Machine
  • Content hash (SHA-256): 6fe74fd03c821a478b697f38b02deeafcbbb7b9353c5fd3ff39e20c43b1db53c
  • Analysis generated: May 11, 2026 13:00 UTC
  • Evidence: ✓ Snapshot stored · ✓ Hash verified
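The published SHA-256 content hash allows anyone to independently check a downloaded copy of the archived policy against this record. A minimal sketch, assuming you have saved the snapshot locally (the file path is a hypothetical example, not part of the record):

```python
import hashlib

# Published content hash from this record (CA-P-011535).
EXPECTED_SHA256 = "6fe74fd03c821a478b697f38b02deeafcbbb7b9353c5fd3ff39e20c43b1db53c"

def verify_snapshot(path: str, expected: str = EXPECTED_SHA256) -> bool:
    """Return True if the file at `path` hashes to the published SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large snapshots don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected

# Example (hypothetical path):
# verify_snapshot("stability-ai-aup-snapshot.html")
```

A match confirms the local copy is byte-identical to the snapshot this analysis was generated from; any edit to the file, however small, produces a different digest.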
Citation Record
Entity: Stability AI
Document: Stability AI Acceptable Use Policy
Record ID: CA-P-011535
Captured: 2026-05-11 13:00:52 UTC
SHA-256: 6fe74fd03c821a47…
URL: https://conductatlas.com/platform/stability-ai/stability-ai-acceptable-use-policy/weapons-development-and-critical-infrastructure-prohibition/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
  • Severity: High


Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Stability AI's Weapons Development and Critical Infrastructure Prohibition clause do?

This prohibition addresses national security-adjacent use cases and establishes that any attempt to use the AI for weapons development or infrastructure disruption is a policy violation subject to immediate termination and potential referral to law enforcement.

How does this clause affect you?

Individual users and enterprise API customers are prohibited from using Stability AI's services for any weapons development, malware creation, or critical infrastructure attack planning; violations of this provision are among the most severe categories and would result in account termination.

Is ConductAtlas affiliated with Stability AI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Stability AI.