You cannot use Perplexity to create content that falsely appears to be made by a human, including deepfakes or fake impersonations of real people.
This analysis describes what Perplexity AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.
This provision directly addresses AI transparency concerns by prohibiting users from generating content designed to conceal that it was produced by an AI, which is increasingly a focus of legislation in the EU and several US states.
Interpretive note: The phrase 'designed to deceive others about its AI origin' requires an assessment of intent, which creates ambiguity in cases where users omit disclosure because they are unaware of applicable obligations rather than in order to deceive.
Users who generate deepfakes, or content that impersonates real individuals without their consent, violate this policy, and its prohibition on concealing AI origin implicates emerging AI disclosure requirements in multiple jurisdictions.
How other platforms handle this
You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.
Do not generate images for political campaigns or to try to influence the outcome of an election. Do not generate images to spread misinformation or disinformation. Do not generate images to attempt to or to actually deceive or defraud anyone. Do not intentionally mislead recipients of generated ima...
Don't claim to be human when directly and sincerely asked, use AI to deceive people about its fundamental nature, or impersonate real people or organizations in misleading ways.
Monitoring
Perplexity AI has changed this document before.
"You may not use the Services to generate content designed to deceive others about its AI origin, including creating deepfakes or impersonating real individuals without their consent."
— Excerpt from Perplexity AI's Perplexity Acceptable Use Policy
REGULATORY LANDSCAPE: This provision interacts with the EU AI Act's requirements on AI-generated content labeling and transparency, as well as emerging US state laws on deepfakes (California AB 602, AB 730) and AI disclosure (Colorado AI Act). The FTC has indicated that undisclosed AI-generated content in commercial contexts may constitute an unfair or deceptive practice.

GOVERNANCE EXPOSURE: Medium. The prohibition is aligned with regulatory direction, but enforcement depends on user behavior rather than platform-level technical controls alone. Platforms may face regulatory scrutiny if they fail to implement provenance or watermarking measures consistent with emerging standards.

JURISDICTION FLAGS: California, Texas, and Virginia have enacted deepfake-specific legislation. EU users are subject to AI Act transparency obligations. Enterprise users in media and advertising should assess jurisdiction-specific disclosure requirements independently.

CONTRACT AND VENDOR IMPLICATIONS: Enterprises using Perplexity for content generation should implement internal review processes to ensure AI-origin disclosure where required by applicable law, as the AUP alone does not substitute for jurisdiction-specific legal compliance.

COMPLIANCE CONSIDERATIONS: Compliance teams should monitor evolving AI labeling regulations and assess whether Perplexity provides technical provenance tools (such as watermarking or metadata) that support compliance with those requirements.
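To make the provenance idea above concrete, the sketch below shows one minimal pattern a compliance team might prototype: a sidecar record that binds a cryptographic hash of a generated file to an AI-origin disclosure, so the disclosure can be verified against the exact bytes it describes. This is an illustrative assumption on our part — the sidecar format and function names are hypothetical, not a Perplexity feature or any formal standard; production systems should use an established scheme such as C2PA content credentials.

```python
import hashlib
import json
import datetime
from pathlib import Path

def write_provenance_sidecar(content: bytes, out_path: Path, generator: str) -> Path:
    """Write a JSON sidecar binding a SHA-256 of `content` to an AI-origin disclosure.

    Hypothetical format for illustration only; real deployments should emit an
    established manifest format such as C2PA instead.
    """
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,
        "generator": generator,  # e.g. the model/tool name, per internal policy
        "created_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    sidecar = out_path.parent / (out_path.name + ".provenance.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar

def verify_provenance(content: bytes, sidecar: Path) -> bool:
    """Return True only if the sidecar asserts AI origin AND its hash matches `content`,
    i.e. the disclosure is bound to this exact file and not a tampered copy."""
    record = json.loads(sidecar.read_text())
    return (
        record.get("ai_generated") is True
        and record.get("sha256") == hashlib.sha256(content).hexdigest()
    )
```

In practice, a review workflow would generate the sidecar at creation time and check it before publication; the hash binding means any edit to the file invalidates the disclosure, which is the core property schemes like C2PA provide with signed manifests.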
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Perplexity AI.