OpenAI · Usage Policies

Prohibition on Influence Operations and Synthetic Media Deception

High severity

What it is

You cannot use OpenAI tools to run fake account networks, generate political propaganda at scale, or create deceptive synthetic media designed to manipulate public opinion or political processes.

Consumer impact (what this means for users)

If you use ChatGPT or OpenAI's API to generate political content at scale, create fake social media personas, or produce coordinated messaging campaigns, you risk account termination and, depending on the jurisdiction and scale of the activity, potential regulatory scrutiny.


Why it matters (compliance & risk perspective)

This provision directly addresses the misuse of generative AI for election interference and large-scale disinformation campaigns — a rapidly evolving area of regulatory focus in the US, EU, and UK.

View original clause language
Don't use our services to create coordinated inauthentic behavior or influence operations, including creating fake personas, generating propaganda designed to influence political discourse, or creating fake social media profiles.

Institutional analysis (Compliance & legal intelligence)

REGULATORY FRAMEWORK: This provision engages the FTC Act Section 5 (deceptive practices through coordinated inauthentic behavior), the EU Digital Services Act (DSA) Articles 34 and 35 (systemic risk mitigation obligations for VLOPs including disinformation risks), the EU AI Act Article 50 (transparency obligations for AI-generated content), the EU's Code of Practice on Disinformation, and potential FEC regulations where AI-generated political content constitutes an election expenditure. The FTC, EU DSA Coordinator, FEC, and national election authorities are the primary enforcement bodies.


Applicable agencies

  • FTC
    FTC has authority over coordinated inauthentic behavior and AI-enabled deceptive practices under FTC Act Section 5, particularly where such behavior harms consumers or distorts markets.
  • State AG
    State attorneys general in California and other states with AI disclosure laws for political content have enforcement authority over influence operations that violate state election and consumer protection statutes.

Provision details

Document information
Document
Usage Policies
Entity
OpenAI
Document last updated
March 5, 2026
Tracking information
First tracked
March 10, 2026
Last verified
April 27, 2026
Record ID
CA-P-003128
Document ID
CA-D-00005
Evidence Provenance
Source URL
Wayback Machine
SHA-256
d69a24617758e5b44e4be8eedeceb598a26dc4e280f2ab1469a45b64203e7403
Verified
✓ Snapshot stored   ✓ Change verified
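The provenance record above pairs a stored snapshot with a SHA-256 digest, so anyone holding a copy of the snapshot can independently confirm it has not been altered. A minimal sketch of that check, assuming the snapshot has been saved to a local file (the file path and function names here are illustrative, not part of the archive's tooling):

```python
import hashlib

# SHA-256 digest recorded in the provenance entry for CA-P-003128.
EXPECTED_SHA256 = "d69a24617758e5b44e4be8eedeceb598a26dc4e280f2ab1469a45b64203e7403"

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Hash a file in chunks so large snapshots are not read fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_snapshot(path: str, expected: str = EXPECTED_SHA256) -> bool:
    """Return True only if the local snapshot matches the recorded digest."""
    return sha256_of(path) == expected
```

A byte-for-byte identical copy of the captured document is required for the digests to match; any re-rendering or re-encoding of the page will produce a different hash.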
How to Cite
ConductAtlas Policy Archive
Entity: OpenAI | Document: Usage Policies | Record: CA-P-003128
Captured: 2026-03-10 03:28:59 UTC | SHA-256: d69a24617758e5b4…
URL: https://conductatlas.com/platform/openai/usage-policies/prohibition-on-influence-operations-and-synthetic-media-deception/
Accessed: April 29, 2026