The policy prohibits using Stability AI's models to create synthetic media, including realistic images, video, or audio of real people, that is designed to deceive viewers about its artificial origin or to misrepresent a real person's statements or actions.
This analysis describes what Stability AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This prohibition covers a category of AI-generated content that is increasingly the subject of specific legislation in multiple jurisdictions, and it establishes that creating non-consensual intimate imagery or politically deceptive deepfakes using Stability AI's tools violates the policy.
Interpretive note: the verbatim policy text was unavailable due to HTML truncation, so the provision's precise scope and any specific carve-outs cannot be confirmed without access to the full policy text.
Users who generate realistic synthetic media of real individuals without their consent, or who create content designed to deceive about its AI origin in contexts where this causes harm, may have their access to Stability AI's services terminated. The clause also has implications for operators building applications that could be used to produce such content at scale.
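To make the operator-side obligation concrete, here is a minimal sketch of the kind of guardrail an operator might place in front of a generation endpoint: a prompt screen that holds requests likely to target a real, named individual or to conceal AI origin. The function name, pattern list, and heuristics are illustrative assumptions for this analysis, not part of Stability AI's policy or any Stability AI API; a production system would pair a trained classifier with human review.

```python
import re

# Illustrative heuristics only (assumptions, not Stability AI policy terms).
# A real guardrail would use a trained classifier plus human review,
# not a regex denylist.
DECEPTION_PATTERNS = [
    r"\bmake it look real\b",
    r"\blook like a real photo(graph)?\b",
    r"\bremove (the )?watermark\b",
    r"\bundetectable\b",
]

# Crude proxy for "targets a named individual": photorealism keywords
# combined with a capitalized first-and-last-name pair after "of".
REALISM = re.compile(r"\b(photorealistic|hyperrealistic|realistic photo)\b", re.I)
NAMED_PERSON = re.compile(r"\bof\s+[A-Z][a-z]+\s+[A-Z][a-z]+\b")

def screen_prompt(prompt: str) -> bool:
    """Return True if the request should be held for human review."""
    if any(re.search(p, prompt, re.I) for p in DECEPTION_PATTERNS):
        return True
    return bool(REALISM.search(prompt) and NAMED_PERSON.search(prompt))

if __name__ == "__main__":
    print(screen_prompt("photorealistic image of Jane Doe at a rally"))  # True
    print(screen_prompt("a watercolor landscape at dusk"))               # False
```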
How other platforms handle this
Don't claim to be human when directly and sincerely asked, use AI to deceive people about its fundamental nature, or impersonate real people or organizations in misleading ways.
Fraud and Deception. Attempting to defraud or misrepresent yourself or your services to others, including impersonating individuals or entities. Engaging in phishing, pharming, or other deceptive activities.
You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.
(1) Regulatory landscape. This provision engages a rapidly expanding body of deepfake-specific legislation, including the DEEPFAKES Accountability Act (proposed federal, US), California AB 602 and AB 730 (non-consensual intimate deepfakes and election deepfakes), Texas SB 751, Virginia Code § 18.2-386.2, and the UK Online Safety Act 2023, which criminalizes non-consensual intimate image sharing including AI-generated material. The EU AI Act prohibits certain AI-generated content manipulation and requires disclosure of AI-generated content in specified contexts. The FTC's authority over deceptive practices is also engaged where AI-generated content is used in commercial communications.

(2) Governance exposure. High for operators deploying Stability AI models in consumer-facing applications, particularly social media tools, video editing platforms, and marketing technology, where end users may generate non-consensual intimate imagery or election-related deepfakes. State-level statutory damages provisions in California and other jurisdictions create direct financial exposure for platform operators.

(3) Jurisdiction flags. California, Texas, Virginia, Georgia, and New York have enacted or are advancing deepfake-specific statutes. UK law now criminalizes non-consensual intimate synthetic images. EU member states are implementing DSA and AI Act provisions requiring synthetic media disclosure. Operators with users in these jurisdictions face heightened exposure.

(4) Contract and vendor implications. API customers building image or video generation tools should assess whether their product design enables non-consensual deepfake creation and implement technical and policy safeguards (a prompt-screening sketch appears earlier in this analysis). Procurement teams should evaluate whether their existing terms of service adequately prohibit this use and whether their content moderation infrastructure is sufficient.

(5) Compliance considerations. Operators should implement content provenance mechanisms such as C2PA watermarking or equivalent disclosure tools to satisfy emerging regulatory requirements for AI-generated content identification (see the sketch after this list). Legal teams should map their user base against applicable deepfake statutes to identify jurisdictions requiring specific disclosures or prohibitions.
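On the provenance point in item (5): full C2PA provenance requires the C2PA toolchain and a signing certificate, but the underlying disclosure idea can be illustrated with something as simple as embedding an AI-generation label in image metadata. The sketch below uses Pillow's PNG text chunks; the field names are hypothetical, the label is unsigned and easily stripped, and nothing here satisfies any specific statute on its own.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str, model: str) -> None:
    """Embed a plain-text AI-disclosure label in a PNG's metadata.

    Minimal illustration of machine-readable disclosure, not C2PA
    provenance: the label is unsigned and can be removed trivially.
    """
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # hypothetical field names
    meta.add_text("generator_model", model)
    img.save(dst_path, pnginfo=meta)

def read_label(path: str) -> dict:
    """Read back the PNG text metadata for verification."""
    return dict(Image.open(path).text)

if __name__ == "__main__":
    # Assumes an existing PNG named output.png in the working directory.
    label_as_ai_generated("output.png", "output_labeled.png", "example-model-v1")
    print(read_label("output_labeled.png"))
```

A signed C2PA manifest additionally binds the claim to the pixel data and a certificate chain, which is why regulators and standards bodies favor it over bare metadata of this kind.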
This analysis is built from archived source documents, structured governance mappings, and historical version tracking.
Is ConductAtlas affiliated with Stability AI? No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Stability AI.