You cannot use OpenAI tools to run fake account networks, generate political propaganda at scale, or create deceptive synthetic media designed to manipulate public opinion or political processes.
If you use ChatGPT or OpenAI's API to generate political content at scale, create fake social media personas, or produce coordinated messaging campaigns, you risk account termination — and potentially regulatory scrutiny depending on the jurisdiction and scale of the activity.
This provision directly addresses the misuse of generative AI for election interference and large-scale disinformation campaigns — a rapidly evolving area of regulatory focus in the US, EU, and UK.
REGULATORY FRAMEWORK: This provision engages FTC Act Section 5 (deceptive practices, including coordinated inauthentic behavior), the EU Digital Services Act (DSA) Articles 34 and 35 (systemic risk mitigation obligations for very large online platforms, including disinformation risks), the EU AI Act Article 50 (transparency obligations for AI-generated content), the EU's Code of Practice on Disinformation, and potentially FEC regulations where AI-generated political content constitutes an election expenditure. The FTC, national Digital Services Coordinators under the DSA, the FEC, and national election authorities are the primary enforcement bodies.