Generating, sharing, or enabling content that sexually exploits children is strictly prohibited and likely illegal under applicable law.
This analysis describes what Perplexity AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This is an absolute prohibition aligned with US and international law; violations may expose users to criminal liability beyond account termination.
Any user who generates or facilitates CSAM through the platform violates this policy and faces account action, as well as potential criminal prosecution under applicable federal and international law.
How other platforms handle this
You may not use the Services to: violate the security or integrity of any network, computer or communications system, software application, or network or computing device; access or use any system without permission, including attempting to probe, scan, or test the vulnerability of a system or to br...
You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.
Customer will not, and will not permit any other person (including any End User) to: ... (d) attempt to reverse engineer, decompile, or otherwise attempt to discover the source code or underlying components (e.g., algorithms, weights, or systems) of the Mistral AI Products, including using the Outpu...
Monitoring
Perplexity AI has changed this document before.
"You may not use the Services to generate, distribute, or facilitate child sexual abuse material (CSAM) or any content that sexually exploits or harms minors." — Excerpt from Perplexity AI's Perplexity Acceptable Use Policy
REGULATORY LANDSCAPE: This provision directly implicates US federal law, including the PROTECT Act and 18 U.S.C. 2256 et seq., enforced by the DOJ, with mandatory reporting obligations to NCMEC under federal law. Internationally, similar prohibitions are enforced under national criminal codes and EU Directive 2011/93/EU. The FTC does not have primary jurisdiction here; criminal enforcement applies.

GOVERNANCE EXPOSURE: High. Platforms that generate AI content must implement technical safeguards to prevent CSAM generation. Failure to do so creates criminal and civil liability exposure for the platform, not just the user. Compliance teams should ensure content moderation systems are audited against this requirement.

JURISDICTION FLAGS: This prohibition applies globally and is consistent with law in virtually all jurisdictions. However, mandatory reporting obligations vary; US-based platforms have specific NCMEC reporting requirements under federal law.

CONTRACT AND VENDOR IMPLICATIONS: Enterprise customers deploying Perplexity in environments accessible to minors should assess whether additional safeguards are in place beyond the AUP prohibition, and whether contractual representations from Perplexity regarding content moderation are adequate.

COMPLIANCE CONSIDERATIONS: Compliance teams should verify that Perplexity's technical implementation includes classifiers or filters sufficient to prevent CSAM generation, and should review Perplexity's incident response procedures for NCMEC reporting compliance.
Built from archived source documents, structured governance mappings, and historical version tracking.
ConductAtlas has identified this type of provision across 1 platform. See the full comparison.
Is ConductAtlas affiliated with Perplexity AI? No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Perplexity AI.