Users cannot use Claude to create or spread false information, manipulated media, or deceptive content designed to mislead people.
This analysis describes what Anthropic's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.
This provision applies broadly to any content designed to mislead, which can include subtle misrepresentations and not just obvious falsehoods, and has particular relevance for media, journalism, and communications professionals.
Interpretive note: The full text of the misinformation provision was truncated in the provided document, so the complete scope of prohibited conduct cannot be fully assessed.
This provision protects consumers from being targeted with AI-generated misinformation or synthetic media created through Claude. It also means Claude cannot be used to build products whose primary purpose is spreading false information at scale.
How other platforms handle this
Do not generate images for political campaigns or to try to influence the outcome of an election. Do not generate images to spread misinformation or disinformation. Do not generate images to attempt to or to actually deceive or defraud anyone. Do not intentionally mislead recipients of generated images...
You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.
Don't claim to be human when directly and sincerely asked, use AI to deceive people about its fundamental nature, or impersonate real people or organizations in misleading ways.
Monitoring
Anthropic has changed this document before.
Receive same-day alerts, structured change summaries, and monitoring for up to 10 platforms.
"Do Not Create or Spread Misinformation [...] This includes using our products or services to: [generate false or misleading information, synthetic media, deceptive content]— Excerpt from Anthropic's Anthropic API Usage Policy
(1) Regulatory landscape: This provision engages FTC Act Section 5 on deceptive commercial practices, state consumer protection laws prohibiting false advertising and deceptive marketing, and, in the EU, the Digital Services Act's provisions on illegal content and systemic disinformation risks for designated platforms. Synthetic media (deepfake) regulation is an active area of state legislation in the US, including in California, Texas, and Virginia.
(2) Governance exposure: Medium. The prohibition on misinformation is operationally challenging to enforce uniformly, given the subjective nature of determining what constitutes 'misleading' content. Operators in media, marketing, and communications must carefully assess their content generation workflows against this provision.
(3) Jurisdiction flags: EU operators face additional obligations under the DSA's Code of Practice on Disinformation. California, Texas, and other states with deepfake-specific legislation create heightened exposure for synthetic media use cases. Political advertising contexts create additional jurisdiction-specific obligations.
(4) Contract and vendor implications: Marketing agencies, PR firms, and content platforms using Claude for content generation should implement review processes to avoid inadvertent policy violations. The prohibition on synthetic media and manipulated content requires specific controls for any media production workflow.
(5) Compliance considerations: Operators should implement disclosure mechanisms for AI-generated content to reduce misinformation risk and align with emerging regulatory requirements. Watermarking, provenance, and content authenticity controls should be evaluated as part of compliance with this provision and applicable law; a minimal implementation sketch follows this list.
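To make the review-process and disclosure considerations above concrete, here is a minimal sketch of how a content-generation workflow could attach an AI-disclosure label and basic provenance metadata, and hold flagged drafts for human review before publication. The generate_draft stub, the needs_human_review keyword check, the field names, and the "claude-family" label are illustrative assumptions, not Anthropic API calls, policy requirements, or regulatory standards.

# Illustrative sketch only. The generate_draft() stub, field names, and the
# keyword check below are assumptions for demonstration; they are not
# Anthropic API calls, policy requirements, or regulatory standards.
import hashlib
import json
from datetime import datetime, timezone


def generate_draft(prompt: str) -> str:
    """Stand-in for a model call; swap in your actual generation client."""
    return f"[draft generated for prompt: {prompt}]"


def needs_human_review(text: str) -> bool:
    """Toy pre-publication gate. A real workflow would use editorial review
    queues or policy-specific classifiers rather than keyword matching."""
    risky_markers = ("breaking:", "confirmed:", "leaked document")
    return any(marker in text.lower() for marker in risky_markers)


def publish_with_disclosure(prompt: str, model_label: str) -> dict:
    """Generate content, attach disclosure and provenance metadata, and hold
    anything flagged for human review instead of publishing it directly."""
    draft = generate_draft(prompt)
    provenance = {
        "content_sha256": hashlib.sha256(draft.encode("utf-8")).hexdigest(),
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "model_label": model_label,
        "ai_disclosure": "This content was generated with AI assistance.",
    }
    if needs_human_review(draft):
        return {"status": "held_for_review", "provenance": provenance}
    return {"status": "published", "content": draft, "provenance": provenance}


if __name__ == "__main__":
    result = publish_with_disclosure(
        "summer product launch recap", model_label="claude-family"
    )
    print(json.dumps(result, indent=2))

A production pipeline would more likely attach a standards-based manifest (for example, C2PA content credentials) rather than an ad-hoc dictionary, and would log held drafts for audit, but the control points shown here (disclosure, provenance, and review before publication) map directly to the compliance considerations above.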
Full compliance analysis
Regulatory citations, enforcement risk, and due diligence action items.
Free: track 1 platform + weekly digest. Watcher: 10 platforms + same-day alerts. No credit card required.
Professional Governance Intelligence
Need to monitor specific governance provisions?
Professional includes provision-level monitoring, governance timelines, regulatory mapping, and audit-ready analysis.
Built from archived source documents, structured governance mappings, and historical version tracking.
Is ConductAtlas affiliated with Anthropic? No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Anthropic.