You are not allowed to create AI videos using a real person's face, voice, or identity without getting their written permission first, and you cannot use the platform to make deceptive deepfake content.
This analysis describes what Synthesia's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This restriction places the legal and ethical burden of obtaining consent directly on the customer, and failure to comply constitutes a breach of agreement that can trigger immediate suspension and indemnification obligations.
Customers who create avatar content using real human likenesses without documented written consent are in breach of the agreement and potentially liable for all resulting legal claims, including those brought against Synthesia by affected individuals.
How other platforms handle this
You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.
Customer will not, and will not permit any other person (including any End User) to: ... (d) attempt to reverse engineer, decompile, or otherwise attempt to discover the source code or underlying components (e.g., algorithms, weights, or systems) of the Mistral AI Products, including using the Outpu...
You may not use the Services to attempt to circumvent, disable, or otherwise interfere with safety-related features of the Services, including features that prevent or restrict the generation of certain types of content.
Monitoring
Synthesia has changed this document before.
"You must not use the Services to create content that: uses the likeness, voice, or personal data of any individual without their prior written consent; impersonates any person or entity in a misleading or deceptive manner; creates deepfakes or other synthetic media intended to deceive viewers about the identity or statements of real individuals." — Excerpt from Synthesia's Terms of Service
Regulatory landscape
This provision directly engages the EU AI Act, which imposes transparency requirements on AI-generated content, including deepfakes, and requires disclosure to recipients that content is AI-generated. GDPR Article 9 is relevant where voice or facial data constituting biometric data is processed without explicit consent. In the US, Illinois BIPA, California's AB 602 and AB 730 (synthetic media statutes), and Texas CUBI create specific consent and disclosure requirements for biometric and synthetic media use. The FTC Act applies to deceptive use of synthetic media in commerce.

Governance exposure
High. The consent requirement for real-person likenesses and voices is legally significant given the expanding regulatory landscape for synthetic media. Customers producing content at scale involving real individuals (corporate communications, training videos featuring real employees) must implement robust consent management processes. Non-compliance creates both contractual breach and direct regulatory exposure.

Jurisdiction flags
Illinois BIPA creates a private right of action for biometric data collection without written consent, making it the highest-risk US jurisdiction for customers collecting voice or facial biometric data. EU customers must comply with GDPR explicit-consent requirements for biometric data. Several US states have enacted or are enacting synthetic media and deepfake laws that independently require consent, regardless of Synthesia's contractual terms.

Contract and vendor implications
Enterprise customers deploying Synthesia for internal communications, training, or marketing involving real employees or individuals should implement written consent documentation workflows. This is a standard due diligence requirement for any AI avatar deployment at scale and should be included in HR and content production policies.

Compliance considerations
Customers should develop and implement a consent management framework for all avatar content creation involving real human likenesses, including template consent forms, version control for consent records, and withdrawal processes. Legal teams should monitor synthetic media legislation in relevant operating jurisdictions, as this regulatory area is evolving rapidly.
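The record-keeping workflow described above (template versioning, documented grants, withdrawal handling) can be sketched as a minimal data structure. This is an illustrative sketch only: the names `ConsentRecord` and `may_generate`, and every field shown, are hypothetical and are not part of Synthesia's product, API, or terms.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """One written-consent grant for use of a person's likeness or voice.

    Field names are illustrative, not mandated by any platform's terms.
    """
    subject: str                        # person whose likeness/voice is used
    scope: str                          # e.g. "internal training video"
    form_version: str                   # which consent-form template was signed
    granted_at: datetime                # when written consent was documented
    withdrawn_at: Optional[datetime] = None

    def withdraw(self, when: Optional[datetime] = None) -> None:
        """Record a withdrawal; subsequent generation checks will fail."""
        self.withdrawn_at = when or datetime.now(timezone.utc)

    def is_active(self) -> bool:
        return self.withdrawn_at is None

def may_generate(record: Optional[ConsentRecord]) -> bool:
    """Gate avatar generation on a documented, unwithdrawn consent record."""
    return record is not None and record.is_active()
```

Gating every generation request through a check like `may_generate` gives legal teams a single auditable point where missing or withdrawn consent blocks production, which is the practical intent of the workflow described above.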
Is ConductAtlas affiliated with Synthesia?
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Synthesia.