You cannot use Claude to generate political propaganda, microtargeting content based on political ideology, or content designed to manipulate people's political views or undermine trust in elections.
This analysis describes what Anthropic's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This provision covers both overt disinformation and subtler political persuasion content, meaning the prohibition extends beyond false statements to include legitimate-seeming political rhetoric that could manipulate views or sow division.
Interpretive note: the phrase 'unduly alter people's political views' has no precise definitional boundary, leaving uncertainty about where legitimate political discourse ends and prohibited influence content begins.
Political consultancies, civic technology vendors, and media organizations that rely on AI-generated persuasion or targeting content must find alternatives. Enterprise customers whose business touches political advertising or campaign services face compliance risk.
This provision means Claude cannot be used to generate political ads, campaign targeting strategies, or content designed to shift your political views without your awareness. It is intended to protect the broader public from AI-amplified political manipulation.
How other platforms handle this
Do not generate images for political campaigns or to try to influence the outcome of an election. Do not generate images to spread misinformation or disinformation. Do not generate images to attempt to or to actually deceive or defraud anyone. Do not intentionally mislead recipients of generated images [...]
Cohere strictly prohibits certain use cases, such as violence, hate speech, fraud, and privacy violations. Developers must outline their use case and obtain approval to access the Cohere API, and are expected to understand the models and their limitations.
OpenAI prohibits using its services to build AI personas for covert influence operations, generate content for political propaganda or astroturfing campaigns, create fake social media profiles, or generate content that falsely portrays real people.
Monitoring
Anthropic has changed this document before.
"Do Not Engage in Influence Operations or Undermine Electoral Integrity [...] This includes using our products or services to: Generate rhetoric that could unduly alter people's political views, sow division, or be used for political ads, propaganda, or targeting strategies based on political ideology [...] Generate rhetoric that falsely undermines trust in democratic institutions and processes, such as electoral integrity.— Excerpt from Anthropic's Anthropic API Usage Policy
(1) REGULATORY LANDSCAPE: This provision engages FTC Act Section 5 for deceptive political practices, Federal Election Commission regulations on political advertising disclosures, and state-level election integrity laws. In the EU, it interacts with the Digital Services Act's systemic risk provisions for very large online platforms and the AI Act's prohibitions on subliminal manipulation. The prohibition on political microtargeting may also interact with GDPR's restrictions on processing political opinion data as a special category.

(2) GOVERNANCE EXPOSURE: Medium. The prohibition on rhetoric that could 'unduly alter people's political views' is operationally broad and potentially difficult to apply consistently, as the line between legitimate political commentary and prohibited persuasion content is not precisely defined in the policy text. Operators running news, civic, or political commentary platforms should assess how this provision applies to their specific deployment context.

(3) JURISDICTION FLAGS: EU operators must evaluate the interaction between this provision and GDPR Article 9 restrictions on processing political opinion data. US political campaigns and PACs using Claude must assess FEC disclosure requirements for AI-generated political content. State-level election laws vary significantly and may impose additional obligations.

(4) CONTRACT AND VENDOR IMPLICATIONS: Civic technology vendors, political consultancies, and media organizations building on Claude should include explicit contractual representations about compliance with this provision. The breadth of the 'unduly alter political views' language means operators in adjacent spaces (news aggregation, commentary, debate preparation) should seek clarification on permitted use boundaries.

(5) COMPLIANCE CONSIDERATIONS: Political technology operators and civic engagement platforms should conduct a specific review of their intended Claude use cases against this provision; a minimal illustrative pre-screen sketch follows this list. AI disclosure requirements for political content are an active area of state legislation (California AB 2655, etc.) and should be monitored alongside this policy.
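Where that use-case review is automated, one pattern is to gate outbound prompts through a lightweight screen before they reach a model, logging every decision for audit. The sketch below is illustrative only: the gate, its function names (`screen_prompt`, `AuditLog`), and the keyword heuristics are our assumptions, not Anthropic's tooling, and a production system would use a trained classifier plus human review rather than keyword matching.

```python
# Illustrative sketch of a hypothetical operator-side compliance gate.
# The categories and regex heuristics are assumptions for demonstration;
# they are not drawn from Anthropic's policy text or API.
import json
import re
import time
from dataclasses import dataclass, field

# Crude heuristics flagging political-persuasion use cases.
FLAGGED_PATTERNS = {
    "political_ads": re.compile(r"\b(campaign ad|political ad|attack ad)\b", re.I),
    "microtargeting": re.compile(r"\b(microtarget|target (voters|demographics))\b", re.I),
    "election_integrity": re.compile(r"\b(rigged election|voter fraud script)\b", re.I),
}

@dataclass
class AuditLog:
    """Append-only audit trail of screening decisions."""
    entries: list = field(default_factory=list)

    def record(self, prompt: str, verdict: str, reasons: list) -> None:
        self.entries.append({
            "ts": time.time(),
            "prompt_excerpt": prompt[:80],
            "verdict": verdict,
            "reasons": reasons,
        })

def screen_prompt(prompt: str, log: AuditLog) -> bool:
    """Return True if the prompt may proceed to the model, False if blocked."""
    reasons = [name for name, pat in FLAGGED_PATTERNS.items() if pat.search(prompt)]
    verdict = "blocked" if reasons else "allowed"
    log.record(prompt, verdict, reasons)
    return not reasons

if __name__ == "__main__":
    log = AuditLog()
    print(screen_prompt("Summarize today's city council agenda.", log))    # True
    print(screen_prompt("Write an attack ad to microtarget voters.", log)) # False
    print(json.dumps(log.entries, indent=2))
```

The audit trail matters as much as the block decision: state disclosure laws and internal compliance reviews both benefit from a record of what was flagged and why.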
Built from archived source documents, structured governance mappings, and historical version tracking.
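As a rough illustration of historical version tracking, the sketch below hashes a fetched policy document against the last archived copy and emits a unified diff when the text changes. Everything here (the archive path, the stubbed fetch step) is an assumption for demonstration; it does not describe ConductAtlas's actual pipeline.

```python
# Illustrative sketch of change detection for an archived policy document.
# Paths and the stubbed fetch are assumptions, not a real pipeline.
import difflib
import hashlib
from pathlib import Path

ARCHIVE = Path("archive/anthropic_usage_policy.txt")  # hypothetical path

def fetch_current_text() -> str:
    """Stub: a real tracker would download and normalize the live page here."""
    return "Do Not Engage in Influence Operations or Undermine Electoral Integrity ..."

def detect_change() -> list[str]:
    """Diff the live text against the archived copy; empty list means no change."""
    current = fetch_current_text()
    previous = ARCHIVE.read_text() if ARCHIVE.exists() else ""
    cur_hash = hashlib.sha256(current.encode()).hexdigest()
    prev_hash = hashlib.sha256(previous.encode()).hexdigest()
    if cur_hash == prev_hash:
        return []
    diff = list(difflib.unified_diff(
        previous.splitlines(), current.splitlines(),
        fromfile="archived", tofile="live", lineterm=""))
    ARCHIVE.parent.mkdir(parents=True, exist_ok=True)
    ARCHIVE.write_text(current)  # archive the new version for the next run
    return diff

if __name__ == "__main__":
    for line in detect_change():
        print(line)
```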
Is ConductAtlas affiliated with Anthropic?
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Anthropic.