Anthropic · Anthropic API Usage Policy

Election and Political Influence Restrictions

Medium severity · Medium confidence · Explicit document language · Unique · 0 of 325 platforms
Document Record

What it is

You cannot use Claude to generate political propaganda, microtargeting content based on political ideology, or content designed to manipulate people's political views or undermine trust in elections.

This analysis describes what Anthropic's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This provision covers both overt disinformation and subtler political persuasion content, meaning the prohibition extends beyond false statements to include legitimate-seeming political rhetoric that could manipulate views or sow division.

Interpretive note: The phrase 'unduly alter people's political views' lacks a precise definitional boundary, creating interpretive uncertainty about where legitimate political discourse ends and prohibited influence content begins.


Consumer impact (what this means for users)

This provision means Claude cannot be used to generate political ads, campaign targeting strategies, or content designed to shift your political views without your awareness. It is intended to protect the broader public from AI-amplified political manipulation.

How other platforms handle this

Midjourney Medium

Do not generate images for political campaigns or to try to influence the outcome of an election. Do not generate images to spread misinformation or disinformation. Do not generate images to attempt to or to actually deceive or defraud anyone. Do not intentionally mislead recipients of generated ima...

Cohere Medium

Certain use cases, such as violence, hate speech, fraud, and privacy violations, are strictly prohibited. Developers must outline their use case and obtain approval to access the Cohere API, acknowledging the models' limitations.

OpenAI Medium

OpenAI prohibits use of its services to build AI personas to conduct covert influence operations, generating content designed for political propaganda or astroturfing campaigns, creating fake social media profiles, and generating content that falsely portrays real people.


Monitoring

Anthropic has changed this document before.

Original Clause Language
Do Not Engage in Influence Operations or Undermine Electoral Integrity [...] This includes using our products or services to: Generate rhetoric that could unduly alter people's political views, sow division, or be used for political ads, propaganda, or targeting strategies based on political ideology [...] Generate rhetoric that falsely undermines trust in democratic institutions and processes, such as electoral integrity.

— Excerpt from Anthropic's Anthropic API Usage Policy

ConductAtlas Analysis

Institutional analysis (Compliance & governance intelligence)

(1) Regulatory landscape: This provision engages FTC Act Section 5 for deceptive political practices, Federal Election Commission regulations on political advertising disclosures, and state-level election integrity laws. In the EU, it interacts with the Digital Services Act's systemic risk provisions for very large online platforms and the AI Act's prohibitions on subliminal manipulation. The prohibition on political microtargeting may also interact with GDPR's restrictions on processing political opinion data as a special category.

(2) Governance exposure: Medium. The prohibition on rhetoric that could "unduly alter people's political views" is operationally broad and potentially difficult to apply consistently, as the line between legitimate political commentary and prohibited persuasion content is not precisely defined in the policy text. Operators running news, civic, or political commentary platforms should assess how this provision applies to their specific deployment context.

(3) Jurisdiction flags: EU operators must evaluate the interaction between this provision and GDPR Article 9 restrictions on processing political opinion data. US political campaigns and PACs using Claude must assess FEC disclosure requirements for AI-generated political content. State-level election laws vary significantly and may impose additional obligations.

(4) Contract and vendor implications: Civic technology vendors, political consultancies, and media organizations building on Claude should include explicit contractual representations about compliance with this provision. The breadth of the "unduly alter political views" language means operators in adjacent spaces (news aggregation, commentary, debate preparation) should seek clarification on permitted use boundaries.

(5) Compliance considerations: Political technology operators and civic engagement platforms should conduct a specific review of their intended Claude use cases against this provision. AI disclosure requirements for political content are an active area of state legislation (California AB 2655, etc.) and should be monitored alongside this policy.


Applicable agencies

  • FTC
    The FTC has authority over deceptive practices in political and commercial communications under Section 5 of the FTC Act

Applicable regulations

  • CFAA — United States Federal
  • DMCA — United States Federal
  • DSA — European Union
  • Trump Executive Order on AI Policy Framework — US

Provision details

Document information
Document: Anthropic API Usage Policy
Entity: Anthropic
Document last updated: May 11, 2026

Tracking information
First tracked: May 11, 2026
Last verified: May 11, 2026
Record ID: CA-P-009964
Document ID: CA-D-00013
Evidence Provenance
Source URL: Wayback Machine
Content hash (SHA-256): 60e693438d9f7f47deb8f3bfb819343e26b5fe0eb90d56280568f1dd95ae660f
Analysis generated: May 11, 2026 00:39 UTC
Evidence: ✓ Snapshot stored · ✓ Hash verified
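The published SHA-256 content hash lets anyone independently confirm that an archived copy of the policy matches the version this analysis was generated from. A minimal sketch of that check in Python follows; the `verify_snapshot` function name and the sample bytes are illustrative assumptions, not part of ConductAtlas's tooling — in practice you would hash the raw bytes of the archived snapshot retrieved from the source URL.

```python
import hashlib

def verify_snapshot(snapshot_bytes: bytes, expected_hex: str) -> bool:
    """Return True if the snapshot's SHA-256 digest matches the published hash."""
    digest = hashlib.sha256(snapshot_bytes).hexdigest()
    return digest == expected_hex.lower()

# Illustration with a well-known test vector (NOT the archived policy document):
assert verify_snapshot(
    b"hello",
    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
)
```

Any single-byte change to the snapshot produces a completely different digest, so a match is strong evidence the archived text is byte-identical to what was hashed.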
Citation Record
Entity: Anthropic
Document: Anthropic API Usage Policy
Record ID: CA-P-009964
Captured: 2026-05-11 00:39:26 UTC
SHA-256: 60e693438d9f7f47…
URL: https://conductatlas.com/platform/anthropic/anthropic-api-usage-policy/election-and-political-influence-restrictions/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity: Medium


Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Anthropic's Election and Political Influence Restrictions clause do?

It prohibits using Claude to generate political propaganda, microtargeted content based on political ideology, or content designed to manipulate people's political views or undermine trust in elections. The prohibition covers both overt disinformation and subtler persuasion content, extending beyond false statements to legitimate-seeming political rhetoric that could manipulate views or sow division.

How does this clause affect you?

This provision means Claude cannot be used to generate political ads, campaign targeting strategies, or content designed to shift your political views without your awareness. It is intended to protect the broader public from AI-amplified political manipulation.

Is ConductAtlas affiliated with Anthropic?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Anthropic.