10 provisions total — 3 high severity, 6 medium severity, 1 low severity
Summary

This is Pika's rulebook for how users are allowed to use its AI video generation tool. It prohibits a wide range of content including nonconsensual deepfakes, child sexual abuse material, impersonation of real people without consent, and use of the service for political advertising or professional advice. If you upload images of real people, you are responsible for having their consent, and you must disclose when any output is AI-generated or artificially manipulated.

Technical / Legal Breakdown

This Acceptable Use Policy (AUP), published by Mellis, Inc. (operating as Pika) and last updated May 16, 2025, governs user conduct and content on Pika's AI video generation service. It operates as a supplement to the Terms of Service and binds all users as a condition of access.

The AUP makes users solely responsible for all inputs and outputs, and prohibits a comprehensive list of uses, including nonconsensual deepfake sexual content, child exploitation material, impersonation, political campaigning, unauthorized advertising, and weapons-related content. It reserves to Pika sole discretion to monitor use, remove content, suspend or terminate accounts, and report violations to law enforcement, including NCMEC.

The AUP also includes a broadly worded catchall provision authorizing removal of content or access that Pika determines, in its sole discretion, poses a risk to safety, integrity, legal compliance, or proper functioning, even if not expressly prohibited. This clause grants significant unilateral enforcement authority, and its practical scope relative to applicable consumer protection or due process requirements may vary by jurisdiction.

The policy engages AI-specific regulatory frameworks, including laws applicable to the design, development, deployment, and use of AI technology, as well as privacy and data protection laws. The explicit reference to digital replicas and consent requirements implicates emerging state-level AI persona legislation, such as statutes enacted in California, Tennessee, and Texas. Users in jurisdictions with established AI governance frameworks, deepfake disclosure mandates, or rights of publicity statutes may have legal protections that operate independently of, or in addition to, this AUP's terms.

Institutional Analysis

Institutional analysis is available on Professional: regulatory exposure by statute, material risk assessment, vendor due diligence action items, and enforcement precedent.
High — 3 provisions
Medium — 6 provisions
Low — 1 provision

Monitoring

Pika has updated this document before.

Watcher includes same-day alerts, structured change summaries, and monitoring for up to 10 platforms.


Professional Governance Intelligence

Need provision-level monitoring and regulatory mapping?

Professional includes governance timelines, compliance memos, audit-ready analysis, and full provision tracking.


Cross-platform context

See how other platforms handle Child Protection and CSAM Prohibition clauses and similar provisions.


Mapped Governance Frameworks

California AB 2013 AI Training Data Transparency (US-CA)
CFAA — Computer Fraud and Abuse Act (United States Federal)
ePrivacy Directive (European Union)
FTC Act Section 5 (United States Federal)
Archival Provenance: Source & Archival Record
Last Captured May 12, 2026 05:48 UTC
Capture Method Automated scheduled archival capture
Document ID CA-D-000844
Version ID CA-V-002493
SHA-256 92bad40d013fd2537f79ca86eb145a9035db9747ea0ac0f350f5faea0847464f
✓ Snapshot stored ✓ Text extracted ✓ Change verified ✓ Hash verified
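The hash verification step above can be reproduced independently: compute the SHA-256 digest of the stored snapshot and compare it to the published value. A minimal sketch in Python — the snapshot filename is hypothetical; the expected digest is the SHA-256 listed in the archival record above:

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 hex digest of a captured document snapshot."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large snapshots do not need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Published digest for version CA-V-002493 (from the record above).
EXPECTED = "92bad40d013fd2537f79ca86eb145a9035db9747ea0ac0f350f5faea0847464f"

# Hypothetical local snapshot path; a mismatch means the stored copy
# differs from what was captured.
# assert sha256_of_file("pika_aup_snapshot.html") == EXPECTED
```

Any byte-level change to the snapshot produces a completely different digest, which is what makes the "Hash verified" check meaningful.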

Governance Monitoring

Monitor governance changes across the platforms you rely on.

Structured alerts for policy changes, governance events, and provision updates across 318+ platforms.
