Stability AI · Stability AI Acceptable Use Policy

Political Manipulation and Disinformation Prohibition

Medium severity · Low confidence · Inferred from context · Unique · 0 of 325 platforms
Document Record

What it is

The policy prohibits using Stability AI's models to generate content designed to interfere with democratic processes, spread disinformation, or manipulate political opinion through synthetic media or automated content generation.

This analysis describes what Stability AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This prohibition covers a category of AI misuse that is increasingly regulated in the EU and several US states, and it establishes that using Stability AI's tools for electoral interference or coordinated inauthentic behavior violates the policy.

Interpretive note: Exact verbatim text was unavailable due to HTML truncation; the specific scope of the political manipulation prohibition, including any carve-outs for legitimate political speech or journalism, cannot be confirmed without the full document.

Consumer impact (what this means for users)

Users and developers who use Stability AI's models to produce political disinformation, synthetic election-related media, or automated influence campaign content violate this provision and risk access termination; this applies to both individual users and operators who deploy the models in political communication contexts.

How other platforms handle this

Cohere Medium

Certain use cases, such as violence, hate speech, fraud, and privacy violations, are strictly prohibited. Developers must outline their intended use case and obtain approval before accessing the Cohere API, and are expected to understand the models and their limitations.

Midjourney Medium

Do not generate images for political campaigns or to try to influence the outcome of an election. Do not generate images to spread misinformation or disinformation.

Runway Medium

You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.


Monitoring

Stability AI has changed this document before.


Institutional analysis (Compliance & governance intelligence)

(1) Regulatory landscape. This provision engages the EU AI Act's provisions on subliminal manipulation and exploitation of vulnerabilities, as well as the EU Code of Practice on Disinformation. California AB 730 and similar state statutes prohibit AI-generated deepfakes in election contexts within defined periods. The FTC's authority over deceptive practices extends to AI-generated political advertising, and the Federal Election Commission (FEC) is evaluating disclosure requirements for AI-generated political content in US federal elections.

(2) Governance exposure. Medium for consumer platform operators; high for operators in the political technology, advertising technology, or media production sectors. Platforms that enable AI-generated political content at scale may face scrutiny from election regulators, FTC enforcement, and EU DSA compliance auditors.

(3) Jurisdiction flags. The EU's AI Act and DSA create heightened obligations for platforms operating in EU member states regarding political advertising and synthetic-media disclosure. California's election deepfake statute applies within a 120-day window before elections. UK electoral law and Ofcom's Online Safety Act guidance create additional obligations for UK-facing platforms.

(4) Contract and vendor implications. Operators in the political advertising or campaign technology space should assess whether their use cases are permitted under the AUP and should seek written clarification from Stability AI where there is ambiguity. Procurement teams should document their compliance assessment before deploying Stability AI models in any election-related context.

(5) Compliance considerations. Legal teams at political technology operators should assess whether their platforms' use of Stability AI models complies with applicable election law in each jurisdiction where they operate, and should evaluate content disclosure mechanisms for AI-generated political content against applicable regulatory guidance.


Applicable agencies

  • FTC — The FTC has authority over deceptive practices, including AI-generated content used in commercial communications that misleads consumers about its origin or authenticity.
  • State AG — State attorneys general in California and other jurisdictions enforce election deepfake statutes and consumer protection laws applicable to AI-generated political content.

Applicable regulations

  • CFAA — United States (Federal)
  • DMCA — United States (Federal)
  • DSA — European Union
  • Trump Executive Order on AI Policy Framework — United States

Provision details

Document information
Document: Stability AI Acceptable Use Policy
Entity: Stability AI
Document last updated: May 11, 2026

Tracking information
First tracked: May 11, 2026
Last verified: May 12, 2026
Record ID: CA-P-011537
Document ID: CA-D-00772
Evidence Provenance
Source URL: Wayback Machine
Content hash (SHA-256): 6fe74fd03c821a478b697f38b02deeafcbbb7b9353c5fd3ff39e20c43b1db53c
Analysis generated: May 11, 2026 13:00 UTC
Evidence: ✓ Snapshot stored · ✓ Hash verified
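The recorded content hash can be independently checked against a locally stored copy of the snapshot with standard tooling. A minimal sketch in Python (the file path and helper names are illustrative, not part of ConductAtlas's tooling):

```python
import hashlib

# SHA-256 hash recorded for this document snapshot.
RECORDED_HASH = "6fe74fd03c821a478b697f38b02deeafcbbb7b9353c5fd3ff39e20c43b1db53c"

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks
    so large snapshots do not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_record(path: str) -> bool:
    """True if the local snapshot's digest equals the recorded hash."""
    return sha256_of_file(path) == RECORDED_HASH
```

Note that the hash covers the exact bytes of the archived document; any re-rendering or re-encoding of the page will produce a different digest.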
Citation Record
Entity: Stability AI
Document: Stability AI Acceptable Use Policy
Record ID: CA-P-011537
Captured: 2026-05-11 13:00:52 UTC
SHA-256: 6fe74fd03c821a47…
URL: https://conductatlas.com/platform/stability-ai/stability-ai-acceptable-use-policy/political-manipulation-and-disinformation-prohibition/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity
Medium


Frequently Asked Questions


Is ConductAtlas affiliated with Stability AI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Stability AI.