This analysis describes what OpenAI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This restriction addresses the specific risk of AI-powered disinformation and political manipulation, areas that are increasingly subject to legislative attention in the EU, UK, and US, and where OpenAI's services could provide significant operational leverage to bad actors.
Interpretive note: The boundary between permissible persuasive content creation and prohibited influence operations may require case-by-case interpretation, particularly for legitimate political communication use cases.
The policy directly affects what any user of ChatGPT or OpenAI's API may do with the service, establishing categories of content and behavior that can result in access suspension or termination. Users who violate the prohibited use categories, including generating content that sexualizes minors, assisting with weapons development, or facilitating unauthorized system access, may have their accounts suspended; the policy does not describe a defined appeals process. You can review the full list of prohibited and restricted use categories at openai.com/policies/usage-policies to assess whether your intended use is permitted.
How other platforms handle this
Certain use cases, such as violence, hate speech, fraud, and privacy violations, are strictly prohibited. Developers must describe their use case and obtain approval to access the Cohere API, and are expected to understand the models and their limitations.
Do not generate images for political campaigns or to try to influence the outcome of an election. Do not generate images to spread misinformation or disinformation. Do not generate images to attempt to or to actually deceive or defraud anyone. Do not intentionally mislead recipients of generated ima...
Fraud and Deception. Attempting to defraud or misrepresent yourself or your services to others, including impersonating individuals or entities. Engaging in phishing, pharming, or other deceptive activities.
Monitoring
OpenAI has changed this document before.
"OpenAI prohibits use of its services to build AI personas to conduct covert influence operations, generating content designed for political propaganda or astroturfing campaigns, creating fake social media profiles, and generating content that falsely portrays real people.— Excerpt from OpenAI's OpenAI Usage Policies
Built from archived source documents, structured governance mappings, and historical version tracking.
Is ConductAtlas affiliated with OpenAI? No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by OpenAI.