Gemini may not be used to create fake reviews, false information intended to mislead, or content impersonating real people or organizations.
This analysis describes what Google Gemini's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This provision covers a broad range of AI-assisted deception use cases that have attracted significant regulatory and enforcement attention, including fake review generation and AI-powered impersonation.
Interpretive note: The boundary between permissible creative or illustrative content and 'content designed to deceive' is not precisely defined and may require case-by-case evaluation depending on context and distribution method.
Using Gemini to generate fake reviews, spread misinformation, or impersonate others violates this policy and risks account termination, in addition to potential legal liability under applicable consumer protection and fraud laws.
How other platforms handle this
You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.
Do not generate images for political campaigns or to try to influence the outcome of an election. Do not generate images to spread misinformation or disinformation. Do not generate images to attempt to or to actually deceive or defraud anyone. Do not intentionally mislead recipients of generated images...
Don't claim to be human when directly and sincerely asked, use AI to deceive people about its fundamental nature, or impersonate real people or organizations in misleading ways.
Monitoring
Google Gemini has changed this document before.
"Don't use our services to generate content designed to deceive or defraud, including fake reviews, misinformation, or impersonation of individuals or entities." — Excerpt from the Google Generative AI Prohibited Use Policy
REGULATORY LANDSCAPE: This provision engages the FTC Act's prohibition on deceptive practices, including FTC rules on endorsements and testimonials that address fake reviews specifically, the EU Digital Services Act's provisions on systemic risks from AI-generated misinformation (applicable to very large online platforms), and applicable state fraud and impersonation statutes. The FTC is the primary enforcement authority in the US; the European Commission and national Digital Services Coordinators apply in the EU.

GOVERNANCE EXPOSURE: Medium. The FTC has actively pursued enforcement actions against fake review generation, and AI-assisted fake review production is a specifically identified area of regulatory concern. Enterprises using Gemini in marketing or content generation workflows should implement controls to prevent use of outputs as fake testimonials or reviews.

JURISDICTION FLAGS: The EU DSA imposes specific obligations on platforms to address AI-generated misinformation. Several US states, including California and New York, have enacted or proposed specific restrictions on AI-generated deceptive content and impersonation. California's law on AI-generated political content (AB 2655) and similar state laws may create additional compliance obligations.

CONTRACT AND VENDOR IMPLICATIONS: Enterprises using Gemini for customer-facing content generation should contractually prohibit downstream use of outputs as fake reviews or deceptive endorsements. Marketing and advertising teams should receive specific guidance on this restriction.

COMPLIANCE CONSIDERATIONS: Compliance teams should audit marketing and content generation workflows for potential fake review or impersonation use cases, implement output review procedures for any content published under human or brand names, and monitor FTC and state AG enforcement activity in AI-generated deceptive content.
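The output-review procedure recommended for compliance teams can be sketched as a simple publication gate. This is a minimal, hypothetical illustration: the names `ContentDraft`, `review_gate`, and `BLOCKED_CHANNELS` are not part of any Gemini API or regulatory requirement, and a real control would also log approvals for audit purposes.

```python
from dataclasses import dataclass

# Hypothetical channel labels where publishing unreviewed AI output could
# implicate the fake-review / deceptive-endorsement restrictions above.
BLOCKED_CHANNELS = {"customer_review", "testimonial", "endorsement"}


@dataclass
class ContentDraft:
    text: str
    channel: str          # where the content will be published
    human_approved: bool  # set True only after documented human review


def review_gate(draft: ContentDraft) -> bool:
    """Return True if the draft may be published as-is.

    Drafts bound for review or endorsement channels always require
    documented human approval before release; other channels pass through.
    """
    if draft.channel in BLOCKED_CHANNELS:
        return draft.human_approved
    return True


# An unapproved testimonial is held for human review.
draft = ContentDraft("Great product!", channel="testimonial", human_approved=False)
print(review_gate(draft))  # False -- held for review
```

The gate deliberately fails closed for sensitive channels: nothing labeled as a review or endorsement ships without an explicit human sign-off, which is the control posture the FTC's fake-review enforcement activity suggests.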
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Google Gemini.