OpenAI tested whether GPT-4o could be used to run influence operations or create political disinformation, applied restrictions to reduce this risk, but acknowledged that some residual risk remains.
This analysis describes what OpenAI's document states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology for details on how these analyses are produced.
The document's explicit acknowledgment of GPT-4o's potential utility for influence operations, combined with the statement that residual risk remains after mitigation, is relevant to users, journalists, election administrators, and regulators evaluating the deployment of this model in political or civic contexts.
Interpretive note: The precise scope of residual risk acknowledged in the influence operations category was not fully quantified in the available document text; the characterization is based on the document's summary-level disclosures.
Consumers encountering AI-generated political content, synthetic personas, or persuasive messaging should be aware that the system card identifies influence operations as an evaluated risk category for GPT-4o, with restrictions applied but residual risk acknowledged by OpenAI.
How other platforms handle this
Cohere strictly prohibits certain use cases, including violence, hate speech, fraud, and privacy violations. Developers must describe their use case and obtain approval to access the Cohere API, and are expected to understand the models and their limitations.
Do not generate images for political campaigns or to try to influence the outcome of an election. Do not generate images to spread misinformation or disinformation. Do not generate images to attempt to or to actually deceive or defraud anyone. Do not intentionally mislead recipients of generated images...
You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.
Monitoring
OpenAI has changed this document before.
"The system card discloses that GPT-4o was evaluated for its potential to assist influence operations, including the generation of persuasive political content, persona creation, and synthetic media that could be used in disinformation campaigns. Restrictions were applied to limit these capabilities, and the document acknowledges residual risk in this category." — Excerpt from OpenAI's GPT-4o System Card (PDF)
Regulatory landscape
The FTC has authority over deceptive practices involving AI-generated content used for commercial or political persuasion. The EU AI Act includes provisions requiring transparency labeling for AI-generated content and restricts AI systems used for manipulation in political contexts. US federal election law administered by the FEC may apply if AI-generated political advertising is produced using GPT-4o without appropriate disclosure. Several EU member states have additional national laws on electoral integrity and political advertising transparency.

Governance exposure
High. The explicit acknowledgment of influence operation capability, combined with residual risk, creates a documented record of known risk that operators deploying GPT-4o in political, media, or civic engagement contexts must independently address.

Jurisdiction flags
EU operators face the EU AI Act's transparency and manipulation prohibition requirements. US operators in electoral contexts should evaluate FEC disclosure requirements and applicable state election laws. Platforms operating in multiple jurisdictions may face conflicting obligations regarding AI-generated political content disclosure.

Contract and vendor implications
Operators deploying GPT-4o for content generation in political, advocacy, or media contexts should review their contractual obligations regarding content authenticity and OpenAI's usage policies, which prohibit use for influence operations but rely on operator enforcement.

Compliance considerations
Organizations using GPT-4o for any political or civic communication should implement their own content review procedures, maintain records of AI-generated content, and assess applicable political advertising disclosure requirements in their target jurisdictions.
ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by OpenAI.