Anthropic · Anthropic API Usage Policy

Misinformation and Deceptive Content Prohibition

Medium severity · Medium confidence · Explicit document language · Unique · 0 of 325 platforms
Document Record

What it is

Users cannot use Claude to create or spread false information, manipulated media, or deceptive content designed to mislead people.

This analysis describes what Anthropic's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This provision applies broadly to any content designed to mislead, reaching subtle misrepresentations as well as obvious falsehoods, and is particularly relevant to media, journalism, and communications professionals.

Interpretive note: The full text of the misinformation provision was truncated in the provided document, so the complete scope of prohibited conduct cannot be fully assessed.

Recent Activity

This document changed recently

High · Feb 27, 2026

Defense contractors and federal agencies using Claude must find alternatives. Enterprise customers with defense-adjacent business face compliance risk.

Consumer impact (what this means for users)

This provision protects consumers from being targeted with AI-generated misinformation or synthetic media created through Claude. It also means Claude cannot be used to build products whose primary purpose is spreading false information at scale.

How other platforms handle this

Midjourney · Medium

Do not generate images for political campaigns or to try to influence the outcome of an election. Do not generate images to spread misinformation or disinformation. Do not generate images to attempt to or to actually deceive or defraud anyone. Do not intentionally mislead recipients of generated ima...

Runway · Medium

You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.

OpenAI · Medium

Don't claim to be human when directly and sincerely asked, use AI to deceive people about its fundamental nature, or impersonate real people or organizations in misleading ways.


Original Clause Language

"Do Not Create or Spread Misinformation [...] This includes using our products or services to: [generate false or misleading information, synthetic media, deceptive content]"

— Excerpt from Anthropic's Anthropic API Usage Policy


Institutional analysis (Compliance & governance intelligence)

(1) Regulatory landscape: This provision engages FTC Act Section 5 on deceptive commercial practices, state consumer protection laws prohibiting false advertising and deceptive marketing, and, in the EU, the Digital Services Act's provisions on illegal content and systemic disinformation risks for designated platforms. Synthetic media (deepfake) regulation is an active area of state legislation in the US, including California, Texas, and Virginia.

(2) Governance exposure: Medium. The prohibition on misinformation is operationally challenging to enforce uniformly, given the subjective nature of determining what constitutes "misleading" content. Operators in media, marketing, and communications must carefully assess their content generation workflows against this provision.

(3) Jurisdiction flags: EU operators face additional obligations under the DSA's Code of Practice on Disinformation. California, Texas, and other states with deepfake-specific legislation create heightened exposure for synthetic media use cases. Political advertising contexts create additional jurisdiction-specific obligations.

(4) Contract and vendor implications: Marketing agencies, PR firms, and content platforms using Claude for content generation should implement review processes to avoid inadvertent policy violations. The prohibition on synthetic media and manipulated content requires specific controls for any media production workflow.

(5) Compliance considerations: Operators should implement disclosure mechanisms for AI-generated content to reduce misinformation risk and align with emerging regulatory requirements. Watermarking, provenance, and content authenticity controls should be evaluated as part of compliance with this provision and applicable law.
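To make the disclosure controls in point (5) concrete, here is a minimal sketch of how an operator might bind an AI-generation disclosure to a piece of content before publication. The `ProvenanceRecord` structure, its field names, and the disclosure wording are hypothetical illustrations, not requirements of Anthropic's policy or any statute; production systems would typically align such metadata with an emerging provenance standard such as C2PA.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical provenance record; field names are illustrative only.
@dataclass
class ProvenanceRecord:
    generator: str        # model or tool that produced the content
    disclosure: str       # human-readable AI-generation notice
    created_at: str       # ISO 8601 timestamp
    content_sha256: str   # hash binding the record to the exact content

def build_provenance(content: str, generator: str) -> ProvenanceRecord:
    """Create a record that ties a disclosure notice to the content's hash."""
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    return ProvenanceRecord(
        generator=generator,
        disclosure="This content was generated with AI assistance.",
        created_at=datetime.now(timezone.utc).isoformat(),
        content_sha256=digest,
    )

if __name__ == "__main__":
    draft = "Example marketing copy produced by a generative model."
    record = build_provenance(draft, generator="claude-api")
    # Publish the disclosure alongside the content, e.g. as sidecar metadata.
    print(json.dumps(asdict(record), indent=2))
```

Because the record carries the content hash, any later edit to the published text breaks the binding, which is the basic property watermarking and provenance schemes aim to provide.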


Applicable agencies

  • FTC: The FTC has authority over deceptive practices in commercial communications, including AI-generated content, under Section 5 of the FTC Act.
  • State AG: State attorneys general have authority under state consumer protection and deepfake laws to address synthetic media and deceptive content violations.

Applicable regulations

  • CFAA (United States Federal)
  • DMCA (United States Federal)
  • DSA (European Union)
  • Trump Executive Order on AI Policy Framework (US)

Provision details

Document information
Document: Anthropic API Usage Policy
Entity: Anthropic
Document last updated: May 11, 2026

Tracking information
First tracked: May 11, 2026
Last verified: May 11, 2026
Record ID: CA-P-009969
Document ID: CA-D-00013
Evidence Provenance
Source URL: Wayback Machine
Content hash (SHA-256): 60e693438d9f7f47deb8f3bfb819343e26b5fe0eb90d56280568f1dd95ae660f
Analysis generated: May 11, 2026 00:39 UTC
Evidence: ✓ Snapshot stored · ✓ Hash verified
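As an illustration of the hash-verification step recorded above, the following is a minimal sketch that recomputes the SHA-256 digest of a stored snapshot and compares it to the published hash. The snapshot filename is a placeholder; only the expected digest comes from the record above.

```python
import hashlib

# Published content hash from the Evidence Provenance record above.
EXPECTED_SHA256 = "60e693438d9f7f47deb8f3bfb819343e26b5fe0eb90d56280568f1dd95ae660f"

def verify_snapshot(path: str) -> bool:
    """Recompute the SHA-256 digest of an archived snapshot file in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == EXPECTED_SHA256

if __name__ == "__main__":
    # "snapshot.html" is a hypothetical local copy of the captured document.
    print("hash verified" if verify_snapshot("snapshot.html") else "hash mismatch")
```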
Citation Record
Entity: Anthropic
Document: Anthropic API Usage Policy
Record ID: CA-P-009969
Captured: 2026-05-11 00:39:26 UTC
SHA-256: 60e693438d9f7f47…
URL: https://conductatlas.com/platform/anthropic/anthropic-api-usage-policy/misinformation-and-deceptive-content-prohibition/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity: Medium



Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions


Is ConductAtlas affiliated with Anthropic?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Anthropic.