Perplexity AI · Perplexity API Terms of Service

Acceptable Use Restrictions

Medium severity · Medium confidence · Inferred from context · Rare · 8 of 325 platforms
Document Record

What it is

Developers are prohibited from using the API to build competing products, generate harmful content, reverse-engineer Perplexity's technology, or deploy it in high-risk fields like medical or legal applications without safeguards.

This analysis describes what Perplexity AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

The prohibition on high-risk applications without safeguards is operationally significant because it places the compliance burden on the developer to determine what constitutes an adequate safeguard, without defining that standard in the agreement.

Interpretive note: The definition of adequate safeguards for high-risk applications and the scope of what constitutes a competing product are not specified in the document, creating interpretive uncertainty for developers.

Consumer impact (what this means for users)

End users of applications built on the Perplexity API should be aware that the API is contractually prohibited from being used for medical diagnosis or legal advice without adequate safeguards, though the definition of adequate safeguards is left to developer discretion under these terms.

How other platforms handle this

Adyen Medium

You agree to comply with Adyen's Acceptable Use Policy, as updated from time to time, which forms part of these Terms and Conditions. Adyen reserves the right to update the Acceptable Use Policy at any time.

Venmo Medium

You may not use the Venmo services for any illegal purpose, to send money to any person or organization on a government sanctions list, for gambling, for purchasing or selling illegal goods or services, or for any activity that violates applicable law. You may not use Venmo for commercial transactio...

Atlassian Medium

Customer and its Users must use the Products in accordance with the Atlassian Acceptable Use Policy. Customer is responsible for ensuring that Users comply with this Agreement and the Atlassian Acceptable Use Policy.


Monitoring

Perplexity AI has changed this document before.

Original Clause Language
You may not use the API to: (a) develop applications that compete directly with Perplexity's core products; (b) generate content that is illegal, harmful, abusive, or violates third-party rights; (c) attempt to circumvent or reverse-engineer the underlying AI models or infrastructure; (d) use the API in any high-risk application without appropriate safeguards, including but not limited to medical diagnosis, legal advice, or safety-critical systems.

— Excerpt from Perplexity AI's Perplexity API Terms of Service


Institutional analysis (Compliance & governance intelligence)

Regulatory landscape: The restriction on high-risk applications engages the EU AI Act, which classifies certain AI system uses as high-risk and imposes conformity assessment and transparency requirements on providers and deployers. FDA regulations may apply if the API is used in a medical device context. The FTC has enforcement authority over deceptive health or legal claims made through AI-generated outputs. State attorney general offices may enforce consumer protection claims where AI outputs in high-risk domains cause harm.

Governance exposure: Medium. The high-risk application restriction is a disclosure and liability management mechanism rather than an operational safeguard. The agreement does not define what constitutes appropriate safeguards, leaving developers to make that determination. This ambiguity creates compliance risk, particularly for developers in regulated industries.

Jurisdiction flags: EU/EEA deployments face the most significant exposure under the EU AI Act's high-risk system classification, which may require formal conformity assessments independent of contractual terms. Healthcare-adjacent deployments in the US may engage FDA oversight depending on intended use. Legal-tech applications may face state bar association rules regarding unauthorized practice of law.

Contract and vendor implications: The non-compete restriction prohibiting applications that compete directly with Perplexity's core products may warrant legal review to assess scope and enforceability, particularly for developers whose roadmaps include AI search or answer-engine functionality. The undefined scope of permitted competitive use creates ongoing interpretive risk.

Compliance considerations: Developers should document their assessment of whether their use case constitutes a high-risk application and what safeguards have been implemented. Legal counsel should review the interaction between this restriction and applicable AI governance frameworks in the deployment jurisdiction. Product teams should implement clear user-facing disclaimers where AI-generated outputs relate to health, legal, or safety-critical topics.


Applicable agencies

  • FTC
    The FTC has authority over deceptive health or legal claims made through AI-generated outputs in consumer-facing applications.

Applicable regulations

CFAA
United States Federal
DMCA
United States Federal
DSA
European Union
Trump Executive Order on AI Policy Framework
US

Provision details

Document information
Document
Perplexity API Terms of Service
Entity
Perplexity AI
Document last updated
May 11, 2026
Tracking information
First tracked
May 11, 2026
Last verified
May 11, 2026
Record ID
CA-P-010515
Document ID
CA-D-00761
Evidence Provenance
Source URL
Wayback Machine
Content hash (SHA-256)
bbeaeced21ec200c9f5050d92d6528e40f6aff710fe415264202f9e6a8991f47
Analysis generated
May 11, 2026 11:26 UTC
Methodology
Evidence
✓ Snapshot stored   ✓ Hash verified
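The published content hash lets a reader check an archived snapshot independently. Below is a minimal Python sketch of that check; the file path and function names are illustrative assumptions, while the expected digest is the SHA-256 value printed in this record.

```python
import hashlib

# Expected digest, copied from this record's Evidence Provenance section.
EXPECTED_SHA256 = "bbeaeced21ec200c9f5050d92d6528e40f6aff710fe415264202f9e6a8991f47"


def sha256_of_file(path: str) -> str:
    """Hash the file in chunks so large snapshots never load fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_snapshot(path: str, expected: str = EXPECTED_SHA256) -> bool:
    """Return True if the stored snapshot matches the published hash."""
    return sha256_of_file(path) == expected
```

Any mismatch means the local copy differs byte-for-byte from the snapshot that was hashed at capture time, so the comparison should be made against the exact archived bytes, not a re-rendered page.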
Citation Record
Entity: Perplexity AI
Document: Perplexity API Terms of Service
Record ID: CA-P-010515
Captured: 2026-05-11 11:26:56 UTC
SHA-256: bbeaeced21ec200c…
URL: https://conductatlas.com/platform/perplexity-ai/perplexity-api-terms-of-service/acceptable-use-restrictions/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity
Medium
Categories


This record is built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Perplexity AI's Acceptable Use Restrictions clause do?

The prohibition on high-risk applications without safeguards is operationally significant because it places the compliance burden on the developer to determine what constitutes an adequate safeguard, without defining that standard in the agreement.

How does this clause affect you?

End users of applications built on the Perplexity API should be aware that the API is contractually prohibited from being used for medical diagnosis or legal advice without adequate safeguards, though the definition of adequate safeguards is left to developer discretion under these terms.

How many platforms have this type of clause?

ConductAtlas has identified this type of provision across 8 platforms. See the full comparison.

Is ConductAtlas affiliated with Perplexity AI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Perplexity AI.