Perplexity AI · Perplexity Acceptable Use Policy

Ban on Disinformation and Influence Operations

Medium severity · Medium confidence · Explicit document language · Unique (0 of 325 platforms)
Document Record

What it is

You cannot use Perplexity to create fake news, fabricate statements attributed to real people, or run coordinated campaigns to manipulate public opinion.

This analysis describes what Perplexity AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This provision addresses a major concern with generative AI, specifically that the platform could be used to produce politically or socially manipulative content at scale. The prohibition on fabricating quotes from real people is particularly concrete.

Interpretive note: The scope of 'content designed to manipulate public opinion through deceptive means' is ambiguous at the margins and could capture legitimate persuasive communications depending on enforcement interpretation.

Consumer impact (what this means for users)

Users who generate fake personas, fabricated quotes, or coordinated disinformation campaigns using the platform violate this policy and risk account termination. The policy covers content designed to manipulate public opinion through deceptive means, which includes election-related influence operations.

How other platforms handle this

Telegram Medium

Failure to comply with the Telegram Terms of Service may result in a temporary or a permanent ban from Telegram or some of its services. In such instances, you might lose the benefits of Telegram Premium and we will not compensate you for this loss.

Anthropic Medium

I.2.a. Each party may terminate these Terms at any time for convenience with Notice, except Anthropic must provide 30 days prior Notice. I.2.b. Either party may terminate these Terms for the other party's material breach by providing 30 days prior Notice detailing the nature of the breach unless cur...

Lime Medium

Lime reserves the right to (a) modify or discontinue, temporarily or permanently, the Services (or any part thereof); (b) refuse any user access to the Services for any reason, including if Lime believes that user has violated this Agreement; at any time and without notice or liability to you or to ...


Monitoring

Perplexity AI has changed this document before.

Original Clause Language

"You may not use the Services to create or spread disinformation, misinformation, or propaganda, or to conduct influence operations, including generating fake personas, fabricating quotes from real people, or creating content designed to manipulate public opinion through deceptive means."

— Excerpt from Perplexity AI's Perplexity Acceptable Use Policy


Institutional analysis (Compliance & governance intelligence)

Regulatory landscape: This provision engages the FTC Act's prohibition on deceptive practices, particularly where fabricated content is used in commercial contexts. It also interacts with election law enforcement by the FEC if influence operations target US elections. In the EU, the Digital Services Act and the Code of Practice on Disinformation are relevant frameworks. The EU AI Act may classify systems used for subliminal manipulation as high-risk or prohibited.

Governance exposure: High. The prohibition is broadly worded and covers a wide range of AI-generated content, but enforcement depends on Perplexity's ability to detect disinformation use at scale, which is operationally challenging for a generative AI platform. Compliance teams should assess whether Perplexity's content moderation capabilities are commensurate with this prohibition.

Jurisdiction flags: EU users face heightened regulatory exposure under the DSA, which imposes disinformation mitigation obligations on platforms. US state election laws may also apply depending on the content and geography of influence operations.

Contract and vendor implications: Enterprise customers in media, political consulting, or public affairs should assess whether their use cases could implicate this prohibition. The clause's reference to 'content designed to manipulate public opinion through deceptive means' is broad and may capture legitimate persuasive communications depending on interpretation.

Compliance considerations: Legal teams should map how this provision interacts with DSA compliance obligations for EU-facing operations and assess whether Perplexity provides audit trails or content provenance mechanisms sufficient to demonstrate compliance.


Applicable agencies

  • FTC
    The FTC has authority over deceptive practices, including AI-generated deceptive content used in commercial or consumer contexts.

Applicable regulations

CFAA (United States, federal)

Provision details

Document information
Document: Perplexity Acceptable Use Policy
Entity: Perplexity AI
Document last updated: May 11, 2026

Tracking information
First tracked: May 11, 2026
Last verified: May 11, 2026
Record ID: CA-P-010545
Document ID: CA-D-00760
Evidence Provenance
Source URL: Wayback Machine
Content hash (SHA-256): 6d664bd3ce2e23b73f26f6644d636b1fb81e00cce440e455edc0bbedcc549ceb
Analysis generated: May 11, 2026 11:44 UTC
Evidence: ✓ Snapshot stored · ✓ Hash verified
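The published SHA-256 content hash lets anyone independently verify that a stored snapshot of the policy is byte-identical to what was captured. The sketch below shows one way to perform that check in Python; it assumes you have the snapshot saved locally as a file, and it cannot know which exact byte stream (encoding, normalization) ConductAtlas hashes, so treat it as illustrative rather than a reproduction of their pipeline.

```python
import hashlib

# Published content hash from the record above (SHA-256, hex).
EXPECTED_HASH = "6d664bd3ce2e23b73f26f6644d636b1fb81e00cce440e455edc0bbedcc549ceb"


def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 hex digest of a file, reading in chunks
    so large snapshots don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def matches_record(path: str, expected: str = EXPECTED_HASH) -> bool:
    """True if the local snapshot's digest equals the published hash."""
    return sha256_of_file(path) == expected.lower()
```

A mismatch does not necessarily indicate tampering; re-downloading the page rather than using the archived snapshot, or a different character encoding, will also change the digest.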
Citation Record
Entity: Perplexity AI
Document: Perplexity Acceptable Use Policy
Record ID: CA-P-010545
Captured: 2026-05-11 11:44:15 UTC
SHA-256: 6d664bd3ce2e23b7…
URL: https://conductatlas.com/platform/perplexity-ai/perplexity-acceptable-use-policy/ban-on-disinformation-and-influence-operations/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity: Medium
Categories:



Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Perplexity AI's Ban on Disinformation and Influence Operations clause do?

This provision addresses a major concern with generative AI, specifically that the platform could be used to produce politically or socially manipulative content at scale. The prohibition on fabricating quotes from real people is particularly concrete.

How does this clause affect you?

Users who generate fake personas, fabricated quotes, or coordinated disinformation campaigns using the platform violate this policy and risk account termination. The policy covers content designed to manipulate public opinion through deceptive means, which includes election-related influence operations.

Is ConductAtlas affiliated with Perplexity AI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Perplexity AI.