You may not attempt to bypass or disable Perplexity's built-in content safety filters or restrictions.
This analysis describes what Perplexity AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This provision protects the integrity of Perplexity's safety systems and means that attempts to jailbreak or manipulate the AI into producing restricted content constitute a policy violation.
Interpretive note: Whether prompt-based manipulation (as opposed to technical exploitation) constitutes 'circumvention' under applicable law such as the CFAA is legally uncertain.
Users who attempt to jailbreak or otherwise circumvent Perplexity's safety filters violate this policy and risk account termination; the provision applies to both technical circumvention and prompt-based manipulation attempts.
How other platforms handle this
You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.
Customer will not, and will not permit any other person (including any End User) to: ... (d) attempt to reverse engineer, decompile, or otherwise attempt to discover the source code or underlying components (e.g., algorithms, weights, or systems) of the Mistral AI Products, including using the Outpu...
You may not use the Services, including any outputs, to develop, train, fine-tune, or improve any machine learning model or artificial intelligence system that competes with AI21's products or services.
Monitoring
Perplexity AI has changed this document before.
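Document-change monitoring of this kind can be sketched as: fetch the policy document, hash its contents, and compare against the hash recorded at the last check. A minimal illustration follows; the URL and state-file path are hypothetical placeholders, not details of any actual monitoring service.

```python
import hashlib
import urllib.request
from pathlib import Path

# Hypothetical values for illustration only.
POLICY_URL = "https://example.com/acceptable-use-policy"
STATE_FILE = Path("last_hash.txt")

def fetch_hash(url: str) -> str:
    """Download the document and return a SHA-256 hex digest of its body."""
    with urllib.request.urlopen(url) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def has_changed(url: str = POLICY_URL, state: Path = STATE_FILE) -> bool:
    """Compare the current hash to the last recorded one.

    Records the current hash, then returns True only if a previous
    hash existed and differs from the current one.
    """
    current = fetch_hash(url)
    previous = state.read_text().strip() if state.exists() else None
    state.write_text(current)
    return previous is not None and previous != current
```

A real monitor would also diff the versions to produce structured change summaries; hashing only answers the yes/no question of whether the document changed.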
"You may not use the Services to attempt to circumvent, disable, or otherwise interfere with safety-related features of the Services, including features that prevent or restrict the generation of certain types of content."
— Excerpt from Perplexity AI's Perplexity Acceptable Use Policy
Regulatory landscape: This provision interacts with the Computer Fraud and Abuse Act (CFAA) in the US, which may apply to technical circumvention of platform security measures. The EU AI Act requires providers of certain AI systems to implement and maintain safety measures, making user circumvention a compliance concern for both the platform and potentially the user.

Governance exposure: Medium. The provision is broadly worded and covers both technical and behavioral circumvention attempts. Enforcement requires Perplexity to maintain robust detection capabilities, and the AUP does not describe the specific mechanisms by which circumvention is detected or acted upon.

Jurisdiction flags: CFAA applicability to prompt-based jailbreaking (as opposed to technical exploitation) is legally uncertain in the US; courts have not consistently resolved whether prompt manipulation constitutes unauthorized access. EU users face additional scrutiny under the AI Act's safety requirements.

Contract and vendor implications: Enterprise customers should be aware that internal testing or red-teaming of Perplexity's safety measures by their own security teams could be characterized as circumvention under this clause without a separate agreement with Perplexity.

Compliance considerations: Legal teams should assess whether internal security testing protocols need to be disclosed to or approved by Perplexity to avoid AUP violations, and should seek clarification on whether authorized security research is exempt.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Perplexity AI.