OpenAI restricts using its tools to generate large volumes of political messaging, to spread false information about voting, or to create content designed to unduly influence election outcomes.
This analysis describes what OpenAI's agreement states, permits, or reserves; it does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology for details.
This provision addresses the use of generative AI in electoral contexts, prohibiting scaled political influence operations and voting misinformation, though the phrase 'unduly influence' introduces interpretive ambiguity about where permissible political expression ends and prohibited influence operations begin.
Interpretive note: The terms 'unduly influence' and 'targeted political messaging at scale' are not defined with precision in the policy, creating interpretive ambiguity about the scope of prohibited electoral content generation.
Users working on political campaigns, in advocacy organizations, or on civic technology projects should assess whether their intended use of OpenAI tools for political content generation falls within permissible bounds: content that constitutes targeted political messaging at scale, or that contains voting misinformation, is prohibited under this provision.
How other platforms handle this
Do not generate images for political campaigns or to try to influence the outcome of an election. Do not generate images to spread misinformation or disinformation.
Monitoring
OpenAI has changed this document before.
"Don't use AI to generate content that could unduly influence elections, including targeted political messaging, voting misinformation, or political rhetoric at scale." — Excerpt from OpenAI's Usage Policies
(1) REGULATORY LANDSCAPE: This provision engages with Federal Election Commission regulations on political advertising disclosures, state election law provisions on campaign communications and voter suppression, and FTC Act Section 5 prohibitions on deceptive practices in commercial contexts. The EU AI Act classifies AI systems used for political advertising and micro-targeting in elections as high-risk or potentially prohibited, and several US states have enacted or proposed AI disclosure requirements for political advertising that intersect with this policy.

(2) GOVERNANCE EXPOSURE: Medium. The 'unduly influence' standard is inherently interpretive and not defined with precision in the document, creating compliance ambiguity for political campaigns, advocacy organizations, and civic technology operators that use AI-assisted content generation. The prohibition on 'targeted political messaging at scale' raises the further question of what volume and targeting criteria trigger the restriction.

(3) JURISDICTION FLAGS: EU operators face heightened obligations under the EU AI Act and the proposed EU Political Advertising Regulation regarding electoral AI applications. Several US states, including California and New York, have enacted or proposed disclosure requirements for AI-generated political content; operators in these jurisdictions face a layered compliance environment combining OpenAI's policy with statutory requirements.

(4) CONTRACT AND VENDOR IMPLICATIONS: Political campaign operators, civic technology vendors, and advocacy organizations using OpenAI via API should document the legal basis for their content generation activities, implement disclosure mechanisms for AI-generated political content, and assess whether their use cases approach the 'unduly influence' or 'targeted messaging at scale' thresholds, neither of which the policy defines with precision.
(5) COMPLIANCE CONSIDERATIONS: Operators in the political and civic space should establish written policies defining permissible AI-assisted political content creation; consult with election law counsel regarding applicable disclosure and transparency requirements; implement human review processes for AI-generated political content; and monitor regulatory developments regarding AI in elections across their operating jurisdictions.
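The human-review recommendation in (5) can be sketched as a simple pre-publication gate. This is an illustrative assumption, not part of OpenAI's policy or API: the keyword list, the `ReviewQueue` class, and the routing logic below are hypothetical, and a production system would need counsel-approved criteria rather than a hard-coded term list.

```python
from dataclasses import dataclass, field

# Illustrative election-related terms (an assumption for this sketch);
# real criteria should come from election law counsel, not a keyword list.
ELECTION_TERMS = ("vote", "ballot", "election", "polling place", "candidate")

@dataclass
class ReviewQueue:
    """Holds AI-generated drafts that touch electoral topics for human sign-off."""
    pending: list = field(default_factory=list)

    def submit(self, draft: str) -> str:
        """Route a draft: electoral content is held for human review; the rest passes."""
        text = draft.lower()
        if any(term in text for term in ELECTION_TERMS):
            self.pending.append(draft)
            return "held_for_review"
        return "approved"

queue = ReviewQueue()
print(queue.submit("Our bakery's new sourdough launches Friday."))    # approved
print(queue.submit("Remember: you can vote by mail this election."))  # held_for_review
```

The design choice here is to fail closed: anything that plausibly touches electoral topics waits for a human reviewer, which matches the written-policy and human-review steps the compliance considerations describe.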
Built from archived source documents, structured governance mappings, and historical version tracking.
Is ConductAtlas affiliated with OpenAI?
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by OpenAI.