Perplexity AI · Perplexity Acceptable Use Policy

Prohibition on Deceptive AI-Origin Content

Medium severity · Medium confidence · Explicit document language · Unique (0 of 325 platforms)
Document Record

What it is

You cannot use Perplexity to create content that falsely appears to be human-made, including deepfakes or impersonations of real people without their consent.

This analysis describes what Perplexity AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This provision directly addresses AI transparency concerns by prohibiting users from generating content designed to conceal that it was produced by an AI, which is increasingly a focus of legislation in the EU and several US states.

Interpretive note: The phrase 'designed to deceive others about its AI origin' requires intent assessment, which creates ambiguity in cases where users are unaware of applicable disclosure obligations.

Consumer impact (what this means for users)

Users who generate deepfakes or content that impersonates real individuals without consent violate this policy. The prohibition on concealing AI origin also implicates emerging AI disclosure requirements in multiple jurisdictions.

How other platforms handle this

Runway — Medium severity

You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.

Midjourney — Medium severity

Do not generate images for political campaigns or to try to influence the outcome of an election. Do not generate images to spread misinformation or disinformation. Do not generate images to attempt to or to actually deceive or defraud anyone. Do not intentionally mislead recipients of generated ima...

OpenAI — Medium severity

Don't claim to be human when directly and sincerely asked, use AI to deceive people about its fundamental nature, or impersonate real people or organizations in misleading ways.


Monitoring

Perplexity AI has changed this document before.

Original Clause Language

"You may not use the Services to generate content designed to deceive others about its AI origin, including creating deepfakes or impersonating real individuals without their consent."

— Excerpt from Perplexity AI's Perplexity Acceptable Use Policy

ConductAtlas Analysis

Institutional analysis (Compliance & governance intelligence)

Regulatory landscape: This provision interacts with the EU AI Act's requirements on AI-generated content labeling and transparency, as well as emerging US state laws on deepfakes (California AB 602, AB 730) and AI disclosure (Colorado AI Act). The FTC has indicated that undisclosed AI-generated content in commercial contexts may constitute an unfair or deceptive practice.

Governance exposure: Medium. The prohibition is aligned with regulatory direction, but enforcement depends on user behavior rather than platform-level technical controls alone. Platforms may face regulatory scrutiny if they fail to implement provenance or watermarking measures consistent with emerging standards.

Jurisdiction flags: California, Texas, and Virginia have enacted deepfake-specific legislation. EU users are subject to AI Act transparency obligations. Enterprise users in media and advertising should assess jurisdiction-specific disclosure requirements independently.

Contract and vendor implications: Enterprises using Perplexity for content generation should implement internal review processes to ensure AI-origin disclosure where required by applicable law; the AUP alone does not substitute for jurisdiction-specific legal compliance.

Compliance considerations: Compliance teams should monitor evolving AI labeling regulations and assess whether Perplexity provides technical provenance tools (such as watermarking or metadata) that support compliance with those requirements.


Applicable agencies

  • FTC
    The FTC has signaled that undisclosed AI-generated content in commercial contexts may constitute an unfair or deceptive practice under the FTC Act.
  • State attorneys general
    Several US states including California have enacted deepfake-specific laws enforced by state attorneys general.

Applicable regulations

  • CFAA — United States (Federal)
  • DMCA — United States (Federal)
  • DSA — European Union
  • Trump Executive Order on AI Policy Framework — United States

Provision details

Document information
Document
Perplexity Acceptable Use Policy
Entity
Perplexity AI
Document last updated
May 11, 2026
Tracking information
First tracked
May 11, 2026
Last verified
May 11, 2026
Record ID
CA-P-010546
Document ID
CA-D-00760
Evidence Provenance
Source URL
Wayback Machine
Content hash (SHA-256)
6d664bd3ce2e23b73f26f6644d636b1fb81e00cce440e455edc0bbedcc549ceb
Analysis generated
May 11, 2026 11:44 UTC
Methodology
Evidence
✓ Snapshot stored   ✓ Hash verified
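The published SHA-256 content hash lets any third party independently confirm that an archived snapshot matches the document analyzed here. A minimal sketch of that check in Python follows; the local filename is hypothetical, and this illustrates the general verification step rather than ConductAtlas's actual tooling:

```python
import hashlib

def sha256_of_snapshot(path: str) -> str:
    """Compute the SHA-256 hex digest of an archived snapshot file,
    reading in chunks so large documents don't load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Published hash from the evidence record above.
PUBLISHED = "6d664bd3ce2e23b73f26f6644d636b1fb81e00cce440e455edc0bbedcc549ceb"

# Hypothetical local snapshot path; a match confirms the stored copy
# is byte-identical to the document this analysis describes.
# assert sha256_of_snapshot("perplexity-aup-snapshot.html") == PUBLISHED
```

If the digests differ, the snapshot has been altered or corresponds to a different document version.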
Citation Record
Entity: Perplexity AI
Document: Perplexity Acceptable Use Policy
Record ID: CA-P-010546
Captured: 2026-05-11 11:44:15 UTC
SHA-256: 6d664bd3ce2e23b7…
URL: https://conductatlas.com/platform/perplexity-ai/perplexity-acceptable-use-policy/prohibition-on-deceptive-ai-origin-content/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity
Medium
Categories



Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Perplexity AI's Prohibition on Deceptive AI-Origin Content clause do?

This provision directly addresses AI transparency concerns by prohibiting users from generating content designed to conceal that it was produced by an AI, which is increasingly a focus of legislation in the EU and several US states.

How does this clause affect you?

Users who generate deepfakes or content that impersonates real individuals without consent violate this policy. The prohibition on concealing AI origin also implicates emerging AI disclosure requirements in multiple jurisdictions.

Is ConductAtlas affiliated with Perplexity AI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Perplexity AI.