OpenAI · Usage Policies

Election Disinformation Prohibition

Medium severity · Medium confidence · Inferred from context · Unique · 0 of 325 platforms
Recent governance activity: OpenAI recorded 7 documented changes in the last 30 days.
Document Record

What it is

The policy prohibits using OpenAI services to create content designed to undermine elections, including generating disinformation, fabricating statements by real candidates, or building tools to suppress voter participation.

This analysis describes what OpenAI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This restriction applies to all users globally and covers both direct content generation and the development of tools designed to interfere with democratic processes.

Interpretive note: Verbatim text could not be extracted from the binary PDF. The provision is inferred from document metadata and publicly available OpenAI Usage Policy language consistent with this document version. The exact scope of permissible political commentary versus prohibited disinformation is not fully defined.

Consumer impact (what this means for users)

Users who attempt to use ChatGPT or the API to generate election-related disinformation, fabricate candidate statements, or build voter suppression tools are in direct violation of this policy and subject to account termination.

How other platforms handle this

Cohere (Medium severity)

Certain use cases, such as violence, hate speech, fraud, and privacy violations, are strictly prohibited. Developers must outline their use case and get approval to access the Cohere API, and are expected to understand the models and their limitations.

Midjourney (Medium severity)

Do not generate images for political campaigns or to try to influence the outcome of an election. Do not generate images to spread misinformation or disinformation. Do not generate images to attempt to or to actually deceive or defraud anyone. Do not intentionally mislead recipients of generated ima...

Runway (Medium severity)

You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.


Monitoring

OpenAI has changed this document before.


Institutional analysis (Compliance & governance intelligence)

1. REGULATORY LANDSCAPE: This provision engages the Federal Election Campaign Act (FECA) and FEC regulations on political advertising and disinformation. State election laws, particularly in jurisdictions with AI-specific deepfake disclosure requirements (California, Texas, Minnesota), are also implicated. In the EU, the revised Code of Practice on Disinformation and the Digital Services Act (DSA) create platform-level obligations regarding electoral disinformation. The Election Integrity Partnership and national electoral commissions are relevant oversight bodies.

2. GOVERNANCE EXPOSURE: Medium to High. For operators deploying AI in political communication, media, or civic technology contexts, this provision creates significant compliance exposure. The boundary between permissible political commentary and prohibited disinformation may require case-by-case legal assessment.

3. JURISDICTION FLAGS: US operators face FEC and state election law exposure. EU operators face DSA obligations and national electoral integrity regulations. Jurisdictions with AI deepfake disclosure laws (California AB 602, Texas SB 751, Minnesota SF 3274) create heightened exposure for any operator generating synthetic media depicting real political figures.

4. CONTRACT AND VENDOR IMPLICATIONS: Political campaign technology vendors, media companies, and civic technology firms should review their API use cases against this prohibition and seek legal counsel on whether their specific use cases fall within permissible bounds. Contracts with political clients should include representations about compliance with this provision.

5. COMPLIANCE CONSIDERATIONS: Operators in the political technology or media sectors should implement specific content moderation for election-related outputs (an illustrative screening sketch follows this list). Legal teams should monitor state-level AI disclosure requirements that may impose additional obligations beyond this policy.
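
Illustrative only: neither the policy nor this analysis prescribes a particular technical control. The Python sketch below shows one way an operator might triage election-related prompts before sending them to a generation API; the keyword lists and routing labels are hypothetical assumptions, and a production deployment would pair trained classifiers, counsel review, and logging rather than rely on keyword matching.

    # Hypothetical pre-submission screen an operator might place in front of
    # generation API calls. Nothing here is part of OpenAI's Usage Policies or
    # API; the term lists and routing decisions are illustrative assumptions.

    ELECTION_TERMS = {
        "election", "ballot", "voter", "candidate", "polling place",
        "voter registration", "campaign",
    }

    HIGH_RISK_PATTERNS = {
        "fake quote", "fabricated statement", "suppress turnout",
        "wrong polling date", "impersonate", "deepfake",
    }


    def screen_prompt(prompt: str) -> str:
        """Return 'allow', 'review', or 'block' for an incoming prompt.

        Heuristic sketch: election-related prompts are held for human review,
        and prompts that also match high-risk patterns are blocked outright.
        """
        text = prompt.lower()
        election = any(term in text for term in ELECTION_TERMS)
        high_risk = any(term in text for term in HIGH_RISK_PATTERNS)

        if election and high_risk:
            return "block"    # likely within the prohibited-use categories
        if election:
            return "review"   # hold for human review before any generation
        return "allow"


    if __name__ == "__main__":
        print(screen_prompt("Write a fake quote from a candidate about the election"))  # block
        print(screen_prompt("Explain how voter registration deadlines work"))           # review
        print(screen_prompt("Draft a release note for our new feature"))                # allow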


Applicable agencies

  • FTC: The FTC has authority over deceptive practices by technology platforms, including failure to enforce stated content moderation policies.
  • State AG: State attorneys general have jurisdiction over state election law violations and AI deepfake disclosure requirements in jurisdictions such as California, Texas, and Minnesota.

Applicable regulations

  • CFAA (United States Federal)
  • DMCA (United States Federal)
  • DSA (European Union)
  • Trump Executive Order on AI Policy Framework (US)

Provision details

Document information
  • Document: Usage Policies
  • Entity: OpenAI
  • Document last updated: March 5, 2026

Tracking information
  • First tracked: March 10, 2026
  • Last verified: May 12, 2026
  • Record ID: CA-P-011459
  • Document ID: CA-D-00005
Evidence Provenance
  • Source URL: Wayback Machine
  • Content hash (SHA-256): d69a24617758e5b44e4be8eedeceb598a26dc4e280f2ab1469a45b64203e7403
  • Analysis generated: March 10, 2026 03:28 UTC
  • Evidence: ✓ Snapshot stored · ✓ Hash verified
Citation Record
Entity: OpenAI
Document: Usage Policies
Record ID: CA-P-011459
Captured: 2026-03-10 03:28:59 UTC
SHA-256: d69a24617758e5b4…
URL: https://conductatlas.com/platform/openai/usage-policies/election-disinformation-prohibition/
Accessed: May 15, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
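
For readers who want to check a stored snapshot against the published content hash, a minimal Python sketch follows. The local filename is a hypothetical placeholder, and the record does not state how the document bytes are canonicalized before hashing, so a mismatch may reflect a different canonicalization rather than a changed document.

    # Minimal verification sketch: recompute SHA-256 over a locally archived copy
    # of the document and compare it with the hash published in this record.
    # The snapshot filename below is a hypothetical placeholder.

    import hashlib
    from pathlib import Path

    RECORD_SHA256 = "d69a24617758e5b44e4be8eedeceb598a26dc4e280f2ab1469a45b64203e7403"


    def sha256_of(path: Path) -> str:
        """Hash the file in 64 KiB chunks so large PDFs need not fit in memory."""
        digest = hashlib.sha256()
        with path.open("rb") as fh:
            for chunk in iter(lambda: fh.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()


    if __name__ == "__main__":
        snapshot = Path("openai_usage_policies_2026-03-05.pdf")
        computed = sha256_of(snapshot)
        print("hash match" if computed == RECORD_SHA256 else f"hash mismatch: {computed}")
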
Classification
  • Severity: Medium
  • Categories:



Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does OpenAI's Election Disinformation Prohibition clause do?

It prohibits using OpenAI services to generate election disinformation, fabricate statements by real candidates, or build tools designed to suppress voter participation. The restriction applies to all users globally and covers both direct content generation and the development of tools designed to interfere with democratic processes.

How does this clause affect you?

Users who attempt to use ChatGPT or the API to generate election-related disinformation, fabricate candidate statements, or build voter suppression tools are in direct violation of this policy and subject to account termination.

Is ConductAtlas affiliated with OpenAI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by OpenAI.