Developers are prohibited from using the API to build competing products, generate harmful content, reverse-engineer Perplexity's technology, or deploy it in high-risk fields like medical or legal applications without safeguards.
This analysis describes what Perplexity AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology for details.
The prohibition on high-risk applications without safeguards is operationally significant because it places the compliance burden on the developer to determine what constitutes an adequate safeguard, without defining that standard in the agreement.
Interpretive note: The definition of adequate safeguards for high-risk applications and the scope of what constitutes a competing product are not specified in the document, creating interpretive uncertainty for developers.
End users of applications built on the Perplexity API should be aware that the API is contractually prohibited from being used for medical diagnosis or legal advice without adequate safeguards, though the definition of adequate safeguards is left to developer discretion under these terms.
How other platforms handle this
You agree to comply with Adyen's Acceptable Use Policy, as updated from time to time, which forms part of these Terms and Conditions. Adyen reserves the right to update the Acceptable Use Policy at any time.
You may not use the Venmo services for any illegal purpose, to send money to any person or organization on a government sanctions list, for gambling, for purchasing or selling illegal goods or services, or for any activity that violates applicable law. You may not use Venmo for commercial transactio...
Customer and its Users must use the Products in accordance with the Atlassian Acceptable Use Policy. Customer is responsible for ensuring that Users comply with this Agreement and the Atlassian Acceptable Use Policy.
Monitoring
Perplexity AI has changed this document before.
"You may not use the API to: (a) develop applications that compete directly with Perplexity's core products; (b) generate content that is illegal, harmful, abusive, or violates third-party rights; (c) attempt to circumvent or reverse-engineer the underlying AI models or infrastructure; (d) use the API in any high-risk application without appropriate safeguards, including but not limited to medical diagnosis, legal advice, or safety-critical systems."
— Excerpt from Perplexity AI's Perplexity API Terms of Service
Regulatory landscape
The restriction on high-risk applications engages the EU AI Act, which classifies certain AI system uses as high-risk and imposes conformity assessment and transparency requirements on providers and deployers. FDA regulations may apply if the API is used in a medical device context. The FTC has enforcement authority over deceptive health or legal claims made through AI-generated outputs. State attorney general offices may enforce consumer protection claims where AI outputs in high-risk domains cause harm.

Governance exposure
Medium. The high-risk application restriction is a disclosure and liability management mechanism rather than an operational safeguard. The agreement does not define what constitutes appropriate safeguards, leaving developers to make that determination. This ambiguity creates compliance risk, particularly for developers in regulated industries.

Jurisdiction flags
EU/EEA deployments face the most significant exposure under the EU AI Act's high-risk system classification, which may require formal conformity assessments independent of contractual terms. Healthcare-adjacent deployments in the US may engage FDA oversight depending on intended use. Legal-tech applications may face state bar association rules regarding unauthorized practice of law.

Contract and vendor implications
The non-compete restriction prohibiting applications that compete directly with Perplexity's core products may warrant legal review to assess scope and enforceability, particularly for developers whose roadmaps include AI search or answer-engine functionality. The undefined scope of permitted competitive use creates ongoing interpretive risk.

Compliance considerations
Developers should document their assessment of whether their use case constitutes a high-risk application and what safeguards have been implemented.
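That documentation need not be elaborate. A minimal sketch of a machine-readable assessment record follows; every field name and category here is illustrative, not a term defined in Perplexity's agreement or any regulatory framework:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative record of a high-risk use-case assessment.
# Field names are assumptions for this sketch, not mandated
# by the agreement or the EU AI Act.
@dataclass
class RiskAssessment:
    use_case: str
    high_risk: bool                 # the developer's own determination
    rationale: str                  # why the use case is or is not high-risk
    safeguards: list[str] = field(default_factory=list)
    reviewed_on: date = field(default_factory=date.today)

    def is_complete(self) -> bool:
        # Convention for this sketch: a high-risk use case with no
        # documented safeguards is an incomplete assessment.
        return not self.high_risk or bool(self.safeguards)

assessment = RiskAssessment(
    use_case="symptom-lookup chatbot",
    high_risk=True,
    rationale="Outputs may be read as medical guidance.",
    safeguards=["user-facing disclaimer", "no dosage or diagnosis output"],
)
print(assessment.is_complete())  # True
```

Keeping such records versioned alongside the codebase gives legal counsel a concrete artifact to review rather than an after-the-fact reconstruction.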
Legal counsel should review the interaction between this restriction and applicable AI governance frameworks in the deployment jurisdiction. Product teams should implement clear user-facing disclaimers where AI-generated outputs relate to health, legal, or safety-critical topics.
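One lightweight pattern for such disclaimers is a topic gate that appends a notice when an output touches a high-risk domain. The sketch below uses deliberately simple keyword matching as a stand-in for whatever classifier or human-review step a real deployment would use; the keyword lists and disclaimer wording are placeholders, not language drawn from the agreement:

```python
# Toy safeguard: append a disclaimer when a response appears to touch
# a high-risk domain. Keyword matching is a simplistic stand-in for a
# real classifier and would miss paraphrases in production.
HIGH_RISK_KEYWORDS = {
    "medical": ["diagnosis", "symptom", "dosage", "treatment"],
    "legal": ["lawsuit", "contract", "liability", "statute"],
}

DISCLAIMERS = {
    "medical": "This is not medical advice. Consult a licensed clinician.",
    "legal": "This is not legal advice. Consult a qualified attorney.",
}

def apply_safeguards(response_text: str) -> str:
    """Return the response with a disclaimer appended for each
    high-risk domain its text appears to touch."""
    lowered = response_text.lower()
    notes = [
        DISCLAIMERS[domain]
        for domain, words in HIGH_RISK_KEYWORDS.items()
        if any(w in lowered for w in words)
    ]
    if notes:
        return response_text + "\n\n" + "\n".join(notes)
    return response_text
```

A gate like this does not by itself satisfy any regulatory standard; it simply makes the disclaimer step auditable, which supports the documentation practice described above.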
Built from archived source documents, structured governance mappings, and historical version tracking.
ConductAtlas has identified this type of provision across 8 platforms. See the full comparison.
Is ConductAtlas affiliated with Perplexity AI? No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Perplexity AI.