Perplexity AI · Perplexity Privacy Policy

AI Model Training Use of User Data

High severity · Medium confidence · Inferred from context · Unique · 0 of 325 platforms
Document Record

What it is

Perplexity may use the questions you ask and your conversations with its AI to train and improve its AI systems.

This analysis describes what Perplexity AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This means your queries, including potentially sensitive ones about health, finances, or personal matters, could become part of the data used to build Perplexity's AI models.

Interpretive note: The exact verbatim language of this provision could not be fully extracted from the rendered HTML source; the characterization is based on available document text and publicly known terms of this policy.

Consumer impact (what this means for users)

Users who submit personal, health-related, or financial queries to Perplexity should be aware that this interaction content may be repurposed for AI model training, which goes beyond the immediate service delivery context most users would expect.

What you can do

⚠️ These actions may provide transparency or partial mitigation but may not fully address the underlying issue. Effectiveness varies by jurisdiction and individual circumstances.
  • Delete Your Data
    Email privacy@perplexity.ai to request deletion of your personal data including query history. Specify that you are requesting deletion of interaction and search query data used for any purpose including AI model training.

How other platforms handle this

Windsurf Medium

We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization). We may leverage these models independent of user selection for processi...

Ideogram Medium

We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.

Writer Medium

Writer does not use Customer Data to train its AI models without explicit customer permission. Customer Data means the data, content, and information that customers and their end users submit to or through the Services.


Monitoring

Perplexity AI has changed this document before.

Original Clause Language

"We may use the information we collect, including the content of your searches and interactions with our AI, to train, improve, and develop our AI models and services."

— Excerpt from Perplexity AI's Perplexity Privacy Policy

ConductAtlas Analysis

Institutional analysis (Compliance & governance intelligence)

(1) REGULATORY LANDSCAPE: This provision engages GDPR Articles on purpose limitation and legitimate interests for EEA and UK users, as well as CCPA and CPRA for California residents, particularly regarding sensitive personal information. EU data protection authorities, including the EDPB, have issued guidance indicating that repurposing personal data for AI training may require a compatibility assessment or explicit consent depending on the nature of the original data and the sensitivity of content involved. The FTC may evaluate this practice under its unfair or deceptive acts standards if consumers are not adequately informed.

(2) GOVERNANCE EXPOSURE: High. The use of open-ended search query content for model training creates meaningful exposure because users frequently submit sensitive personal information in search queries without awareness that such content may be retained and repurposed. This is particularly acute for queries touching on health conditions, legal situations, or financial circumstances.

(3) JURISDICTION FLAGS: EU and UK users have the strongest protections; legitimate interests as a legal basis for AI training faces heightened scrutiny and may require a documented balancing test. California residents may characterize this as processing of sensitive personal information under CPRA if queries reveal health, financial, or other protected categories. Illinois, New York, and other states with emerging privacy legislation may create additional exposure as those laws mature.

(4) CONTRACT AND VENDOR IMPLICATIONS: Enterprise customers deploying Perplexity in workflows involving confidential data, protected health information, or privileged communications face potential conflict between this provision and their own data governance obligations. Procurement teams should assess whether a data processing agreement is available and whether the AI training provision can be contractually limited.

(5) COMPLIANCE CONSIDERATIONS: Compliance teams should evaluate whether current consent mechanisms and privacy notices provide sufficiently clear disclosure of AI training data use to satisfy informed consent standards across relevant jurisdictions. Data mapping should explicitly document the query-to-training-data pipeline. A DPIA may be warranted for enterprise or high-volume deployments.


Applicable agencies

  • FTC
    The FTC has jurisdiction over unfair or deceptive data practices and has expressed interest in AI training data use; this provision may be evaluated under the FTC Act if consumer notice is found inadequate.

Applicable regulations

  • EU AI Act (European Union)
  • California AB 2013 AI Training Data Transparency (US-CA)
  • Colorado AI Act (US-CO)
  • EU AI Act - High Risk Provisions (EU)
  • GDPR (European Union)
  • Texas AI Act (Texas, USA)
  • Trump Executive Order on AI Policy Framework (US)
  • UK GDPR (United Kingdom)

Provision details

Document information
  • Document: Perplexity Privacy Policy
  • Entity: Perplexity AI
  • Document last updated: May 5, 2026

Tracking information
  • First tracked: May 8, 2026
  • Last verified: May 11, 2026
  • Record ID: CA-P-010346
  • Document ID: CA-D-00510

Evidence Provenance
  • Source URL: Wayback Machine
  • Content hash (SHA-256): fca7662177c01e9e64b7c0ea113ed973b3479ee8b95ba564762d7653de962e8a
  • Analysis generated: May 8, 2026 15:07 UTC
  • Evidence: ✓ Snapshot stored · ✓ Hash verified
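The published SHA-256 hash lets anyone re-verify an archived snapshot independently. A minimal sketch in Python, assuming you have the snapshot bytes locally; `verify_snapshot` is an illustrative helper name, not part of any ConductAtlas tooling:

```python
import hashlib

def verify_snapshot(content: bytes, expected_hex: str) -> bool:
    """Return True if the SHA-256 digest of the snapshot bytes
    matches the published content hash (case-insensitive hex)."""
    return hashlib.sha256(content).hexdigest() == expected_hex.lower()

# Demonstration with a stand-in payload (the real snapshot bytes
# are not reproduced here):
payload = b"example snapshot body"
published = hashlib.sha256(payload).hexdigest()

print(verify_snapshot(payload, published))      # → True
print(verify_snapshot(b"tampered", published))  # → False
```

Because SHA-256 is collision-resistant in practice, a matching digest is strong evidence the stored snapshot is byte-for-byte identical to what was hashed at capture time.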
Citation Record
Entity: Perplexity AI
Document: Perplexity Privacy Policy
Record ID: CA-P-010346
Captured: 2026-05-08 15:07:23 UTC
SHA-256: fca7662177c01e9e…
URL: https://conductatlas.com/platform/perplexity-ai/perplexity-privacy-policy/ai-model-training-use-of-user-data/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
  • Severity: High


Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Perplexity AI's AI Model Training Use of User Data clause do?

This means your queries, including potentially sensitive ones about health, finances, or personal matters, could become part of the data used to build Perplexity's AI models.

How does this clause affect you?

Users who submit personal, health-related, or financial queries to Perplexity should be aware that this interaction content may be repurposed for AI model training, which goes beyond the immediate service delivery context most users would expect.

Is ConductAtlas affiliated with Perplexity AI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Perplexity AI.