OpenAI found that GPT-4o can help bad actors run large-scale influence and disinformation campaigns, and rated this a 'medium' risk under its own Preparedness Framework — not severe enough to block release, but serious enough to disclose.
Because GPT-4o can generate persuasive content at scale for influence operations, everyday consumers of news, social media, and online information may encounter AI-generated disinformation produced with GPT-4o's assistance without knowing it.
AI-powered influence operations at scale threaten democratic processes, public health information, and social trust — and GPT-4o's 'medium' risk rating in this category means it is capable of providing meaningful assistance to such campaigns.
(1) REGULATORY FRAMEWORK: This provision implicates FTC Act Section 5 (deceptive practices involving AI-generated persuasive content); the EU AI Act, Article 5(1)(a) (prohibition on subliminal manipulation techniques) and Article 50 (transparency obligations for AI-generated content); the EU Digital Services Act (DSA) risk-assessment obligations for very large online platforms that may host GPT-4o-generated content; and FEC regulations on AI-generated political advertising. The FTC, the EU AI Office, and the FEC hold the relevant enforcement authority.