You cannot use Perplexity to create fake news, fabricate statements attributed to real people, or run coordinated campaigns to manipulate public opinion.
This analysis describes what Perplexity AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.
This provision addresses a major concern with generative AI, specifically that the platform could be used to produce politically or socially manipulative content at scale. The prohibition on fabricating quotes from real people is particularly concrete.
Interpretive note: The scope of 'content designed to manipulate public opinion through deceptive means' is ambiguous at the margins and could capture legitimate persuasive communications depending on enforcement interpretation.
Users who generate fake personas, fabricated quotes, or coordinated disinformation campaigns using the platform violate this policy and risk account termination. The policy covers content designed to manipulate public opinion through deceptive means, which includes election-related influence operations.
How other platforms handle this
Failure to comply with the Telegram Terms of Service may result in a temporary or a permanent ban from Telegram or some of its services. In such instances, you might lose the benefits of Telegram Premium and we will not compensate you for this loss.
I.2.a. Each party may terminate these Terms at any time for convenience with Notice, except Anthropic must provide 30 days prior Notice. I.2.b. Either party may terminate these Terms for the other party's material breach by providing 30 days prior Notice detailing the nature of the breach unless cur...
Lime reserves the right to (a) modify or discontinue, temporarily or permanently, the Services (or any part thereof); (b) refuse any user access to the Services for any reason, including if Lime believes that user has violated this Agreement; at any time and without notice or liability to you or to ...
Monitoring
Perplexity AI has changed this document before.
"You may not use the Services to create or spread disinformation, misinformation, or propaganda, or to conduct influence operations, including generating fake personas, fabricating quotes from real people, or creating content designed to manipulate public opinion through deceptive means."
— Excerpt from Perplexity AI's Perplexity Acceptable Use Policy
Regulatory landscape: This provision engages the FTC Act's prohibition on deceptive practices, particularly where fabricated content is used in commercial contexts. It also interacts with election law enforcement by the FEC if influence operations target US elections. In the EU, the Digital Services Act and the Code of Practice on Disinformation are relevant frameworks, and the EU AI Act may classify systems used for subliminal manipulation as high-risk or prohibited.

Governance exposure: High. The prohibition is broadly worded and covers a wide range of AI-generated content, but enforcement depends on Perplexity's ability to detect disinformation use at scale, which is operationally challenging for a generative AI platform. Compliance teams should assess whether Perplexity's content moderation capabilities are commensurate with this prohibition.

Jurisdiction flags: EU users face heightened regulatory exposure under the DSA, which imposes disinformation mitigation obligations on platforms. US state election laws may also apply depending on the content and geography of influence operations.

Contract and vendor implications: Enterprise customers in media, political consulting, or public affairs should assess whether their use cases could implicate this prohibition. The clause's reference to 'content designed to manipulate public opinion through deceptive means' is broad and may capture legitimate persuasive communications depending on interpretation.

Compliance considerations: Legal teams should map how this provision interacts with DSA compliance obligations for EU-facing operations and assess whether Perplexity provides audit trails or content provenance mechanisms sufficient to demonstrate compliance.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Perplexity AI.