Perplexity may use the questions you ask and your conversations with its AI to train and improve its AI systems.
This analysis describes what Perplexity AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.
This means your queries, including potentially sensitive ones about health, finances, or personal matters, could become part of the data used to build Perplexity's AI models.
Interpretive note: The verbatim language of this provision could not be fully extracted from the rendered HTML source; the characterization is based on available document text and publicly known terms of this policy.
Users who submit personal, health-related, or financial queries to Perplexity should be aware that this interaction content may be repurposed for AI model training, a use that extends beyond the immediate service delivery most users would expect.
How other platforms handle this
We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization). We may leverage these models independent of user selection for processi...
We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.
Writer does not use Customer Data to train its AI models without explicit customer permission. Customer Data means the data, content, and information that customers and their end users submit to or through the Services.
Monitoring
Perplexity AI has changed this document before.
Receive same-day alerts, structured change summaries, and monitoring for up to 10 platforms.
"We may use the information we collect, including the content of your searches and interactions with our AI, to train, improve, and develop our AI models and services.— Excerpt from Perplexity AI's Perplexity Privacy Policy
(1) REGULATORY LANDSCAPE: This provision engages the GDPR principles of purpose limitation (Article 5(1)(b)) and the legitimate interests basis (Article 6(1)(f)) for EEA and UK users, as well as the CCPA and CPRA for California residents, particularly regarding sensitive personal information. EU data protection authorities, including the EDPB, have issued guidance indicating that repurposing personal data for AI training may require a compatibility assessment or explicit consent, depending on the nature of the original data and the sensitivity of the content involved. The FTC may evaluate this practice under its unfair or deceptive acts or practices standard (Section 5 of the FTC Act) if consumers are not adequately informed.

(2) GOVERNANCE EXPOSURE: High. The use of open-ended search query content for model training creates meaningful exposure because users frequently submit sensitive personal information in search queries without awareness that such content may be retained and repurposed. This is particularly acute for queries touching on health conditions, legal situations, or financial circumstances.

(3) JURISDICTION FLAGS: EU and UK users have the strongest protections; legitimate interests as a legal basis for AI training faces heightened scrutiny and may require a documented balancing test. California residents may characterize this as processing of sensitive personal information under the CPRA if queries reveal health, financial, or other protected categories. Illinois, New York, and other states with emerging privacy legislation may create additional exposure as those laws mature.

(4) CONTRACT AND VENDOR IMPLICATIONS: Enterprise customers deploying Perplexity in workflows involving confidential data, protected health information, or privileged communications face potential conflict between this provision and their own data governance obligations. Procurement teams should assess whether a data processing agreement is available and whether the AI training provision can be contractually limited.

(5) COMPLIANCE CONSIDERATIONS: Compliance teams should evaluate whether current consent mechanisms and privacy notices disclose AI training data use clearly enough to satisfy informed consent standards across relevant jurisdictions. Data mapping should explicitly document the query-to-training-data pipeline. A DPIA may be warranted for enterprise or high-volume deployments.
Full compliance analysis
Regulatory citations, enforcement risk, and due diligence action items.
Free: track 1 platform + weekly digest. Watcher: 10 platforms + same-day alerts. No credit card required.
How 10 AI platforms describe the use of user data for model training, improvement, and development, based on archived governance provisions.
Professional Governance Intelligence
Need to monitor specific governance provisions?
Professional includes provision-level monitoring, governance timelines, regulatory mapping, and audit-ready analysis.
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Perplexity AI.