This page describes what the document states, permits, or reserves. It does not constitute a legal determination about enforceability, and regulatory applicability may vary by jurisdiction.
This is Pika's rulebook for how users are allowed to use its AI video generation tool. It prohibits a wide range of content including nonconsensual deepfakes, child sexual abuse material, impersonation of real people without consent, and use of the service for political advertising or professional advice. If you upload images of real people, you are responsible for having their consent, and you must disclose when any output is AI-generated or artificially manipulated.
This Acceptable Use Policy (AUP), published by Mellis, Inc. (operating as Pika) and last updated May 16, 2025, governs user conduct and content on Pika's AI video generation service. It operates as a supplement to the Terms of Service and binds all users as a condition of access. The AUP makes users solely responsible for all inputs and outputs; prohibits a comprehensive list of uses, including nonconsensual deepfake sexual content, child exploitation material, impersonation, political campaigning, unauthorized advertising, and weapons-related content; and reserves to Pika sole discretion to monitor use, remove content, suspend or terminate accounts, and report violations to law enforcement, including NCMEC.

The AUP also includes a broadly worded catchall provision authorizing removal of content or access that Pika determines, in its sole discretion, poses a risk to safety, integrity, legal compliance, or proper functioning, even if not expressly prohibited. This clause grants significant unilateral enforcement authority, and its practical scope relative to applicable consumer protection or due process requirements may vary by jurisdiction.

The policy engages AI-specific regulatory frameworks, including laws applicable to the design, development, deployment, and use of AI technology, as well as privacy and data protection laws. Its explicit reference to digital replicas and consent requirements implicates emerging state-level AI persona legislation, such as the statutes enacted in California, Tennessee, and Texas. Users in jurisdictions with established AI governance frameworks, deepfake disclosure mandates, or rights of publicity statutes may have legal protections that operate independently of, or in addition to, this AUP's terms.
Pika has updated this document before.