OpenAI · OpenAI Usage Policies

Prohibition on Influence Operations and Deceptive AI Personas

Medium severity · Medium confidence · Explicit document language · Unique · 0 of 325 platforms
Recent governance activity: OpenAI recorded 5 documented changes in the last 30 days.

This analysis describes what OpenAI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This restriction addresses the specific risk of AI-powered disinformation and political manipulation, areas that are increasingly subject to legislative attention in the EU, UK, and US, and where OpenAI's services could provide significant operational leverage to bad actors.

Interpretive note: The boundary between permissible persuasive content creation and prohibited influence operations may require case-by-case interpretation, particularly for legitimate political communication use cases.

Consumer impact (what this means for users)

The policy directly affects what any user of ChatGPT or OpenAI's API may do with the service, establishing categories of content and behavior that can result in access suspension or termination. Users who violate the prohibited use categories (including generating content that sexualizes minors, assisting with weapons development, or facilitating unauthorized system access) may have their accounts suspended; this document does not describe an appeals process. You can review the full list of prohibited and restricted use categories at openai.com/policies/usage-policies to assess whether your intended use is permitted.

How other platforms handle this

Cohere Medium

Certain use cases, such as violence, hate speech, fraud, and privacy violations, are strictly prohibited. Developers must outline their use case and obtain approval before accessing the Cohere API, and are expected to understand the models and their limitations.

Midjourney Medium

Do not generate images for political campaigns or to try to influence the outcome of an election. Do not generate images to spread misinformation or disinformation. Do not generate images to attempt to or to actually deceive or defraud anyone. Do not intentionally mislead recipients of generated ima...

Amazon Medium

Fraud and Deception. Attempting to defraud or misrepresent yourself or your services to others, including impersonating individuals or entities. Engaging in phishing, pharming, or other deceptive activities.


Monitoring

OpenAI has changed this document before.

Original Clause Language (Document Record)
OpenAI prohibits use of its services to build AI personas to conduct covert influence operations, generating content designed for political propaganda or astroturfing campaigns, creating fake social media profiles, and generating content that falsely portrays real people.

— Excerpt from OpenAI's OpenAI Usage Policies

Applicable regulations

CFAA (United States, Federal)
DMCA (United States, Federal)
DSA (European Union)
Trump Executive Order on AI Policy Framework (US)

Provision details

Document information
Document: OpenAI Usage Policies
Entity: OpenAI
Document last updated: May 11, 2026

Tracking information
First tracked: May 11, 2026
Last verified: May 12, 2026
Record ID: CA-P-010650
Document ID: CA-D-00753
Evidence Provenance
Source URL: Wayback Machine
Content hash (SHA-256): 7bc76af79d3d7702e7ce284199b0b15a9dc7dd89f62958bd0823240c00eaab06
Analysis generated: May 11, 2026 12:43 UTC
Evidence: ✓ Snapshot stored · ✓ Hash verified
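The hash-verification step recorded above can be reproduced independently with a few lines of Python. This is a minimal sketch, assuming you hold a local copy of the archived snapshot; the filename `snapshot.html` is hypothetical, and the expected digest is the SHA-256 value published in this record.

```python
import hashlib

# Published content hash from the ConductAtlas record (CA-P-010650)
EXPECTED_SHA256 = "7bc76af79d3d7702e7ce284199b0b15a9dc7dd89f62958bd0823240c00eaab06"

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks
    so large snapshots are never loaded into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# 'snapshot.html' is a hypothetical local copy of the archived document:
# if sha256_of_file("snapshot.html") == EXPECTED_SHA256:
#     print("Snapshot matches the published record")
```

A matching digest shows only that the local file is byte-identical to the snapshot that was hashed; it does not by itself establish when or from where that snapshot was captured.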
Citation Record
Entity: OpenAI
Document: OpenAI Usage Policies
Record ID: CA-P-010650
Captured: 2026-05-11 12:43:28 UTC
SHA-256: 7bc76af79d3d7702…
URL: https://conductatlas.com/platform/openai/openai-usage-policies/prohibition-on-influence-operations-and-deceptive-ai-personas/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity: Medium



Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does OpenAI's Prohibition on Influence Operations and Deceptive AI Personas clause do?

The clause prohibits using OpenAI's services to build AI personas for covert influence operations, to generate content for political propaganda or astroturfing campaigns, to create fake social media profiles, or to generate content that falsely portrays real people. These restrictions target AI-powered disinformation and political manipulation, areas increasingly subject to legislative attention in the EU, UK, and US, and where OpenAI's services could provide significant operational leverage to bad actors.

Is ConductAtlas affiliated with OpenAI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by OpenAI.