Google Gemini · Google Generative AI Prohibited Use Policy

Deceptive Content Prohibition

Medium severity · Medium confidence · Explicit document language · Unique · 0 of 325 platforms
Recent governance activity: Google Gemini recorded 8 documented changes in the last 30 days.
Document Record

What it is

Gemini may not be used to create fake reviews, false information intended to mislead, or content impersonating real people or organizations.

This analysis describes what Google Gemini's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This provision covers a broad range of AI-assisted deception use cases that have attracted significant regulatory and enforcement attention, including fake review generation and AI-powered impersonation.

Interpretive note: The boundary between permissible creative or illustrative content and 'content designed to deceive' is not precisely defined and may require case-by-case evaluation depending on context and distribution method.

Consumer impact (what this means for users)

Users who use Gemini to generate fake reviews, spread misinformation, or impersonate others violate this policy and risk account termination, in addition to potential legal liability under applicable consumer protection and fraud laws.

How other platforms handle this

Runway Medium

You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.

Midjourney Medium

Do not generate images for political campaigns or to try to influence the outcome of an election. Do not generate images to spread misinformation or disinformation. Do not generate images to attempt to or to actually deceive or defraud anyone. Do not intentionally mislead recipients of generated ima...

OpenAI Medium

Don't claim to be human when directly and sincerely asked, use AI to deceive people about its fundamental nature, or impersonate real people or organizations in misleading ways.


Monitoring

Google Gemini has changed this document before.

Original Clause Language

"Don't use our services to generate content designed to deceive or defraud, including fake reviews, misinformation, or impersonation of individuals or entities."

— Excerpt from Google Gemini's Google Generative AI Prohibited Use Policy

ConductAtlas Analysis

Institutional analysis (Compliance & governance intelligence)

REGULATORY LANDSCAPE: This provision engages the FTC Act's prohibition on deceptive practices, including FTC rules on endorsements and testimonials that address fake reviews specifically, the EU Digital Services Act's provisions on systemic risks from AI-generated misinformation (applicable to very large online platforms), and applicable state fraud and impersonation statutes. The FTC is the primary enforcement authority in the US; the European Commission and national Digital Services Coordinators apply in the EU.

GOVERNANCE EXPOSURE: Medium. The FTC has actively pursued enforcement actions against fake review generation, and AI-assisted fake review production is a specifically identified area of regulatory concern. Enterprises using Gemini in marketing or content generation workflows should implement controls to prevent use of outputs as fake testimonials or reviews.

JURISDICTION FLAGS: The EU DSA imposes specific obligations on platforms to address AI-generated misinformation. Several US states, including California and New York, have enacted or proposed specific restrictions on AI-generated deceptive content and impersonation. California's law on AI-generated political content (AB 2655) and similar state laws may create additional compliance obligations.

CONTRACT AND VENDOR IMPLICATIONS: Enterprises using Gemini for customer-facing content generation should contractually prohibit downstream use of outputs as fake reviews or deceptive endorsements. Marketing and advertising teams should receive specific guidance on this restriction.

COMPLIANCE CONSIDERATIONS: Compliance teams should audit marketing and content generation workflows for potential fake review or impersonation use cases, implement output review procedures for any content published under human or brand names, and monitor FTC and state AG enforcement activity involving AI-generated deceptive content.


Applicable agencies

  • FTC
    The FTC has active jurisdiction over fake review generation and AI-assisted deceptive practices under the FTC Act and its Endorsement Guides.
  • State AG
    State attorneys general have jurisdiction over consumer fraud, impersonation, and deceptive business practices under state consumer protection laws.

Applicable regulations

  • CFAA — United States (federal)
  • DSA — European Union
  • Trump Executive Order on AI Policy Framework — United States

Provision details

Document information
  • Document: Google Generative AI Prohibited Use Policy
  • Entity: Google Gemini
  • Document last updated: May 12, 2026

Tracking information
  • First tracked: April 18, 2026
  • Last verified: May 12, 2026
  • Record ID: CA-P-011359
  • Document ID: CA-D-00325
Evidence Provenance
  • Source URL: Wayback Machine
  • Content hash (SHA-256): 63e816d10b250e0548b988099f94a40d1e970e1f90744d59ee3d2053af23c1a7
  • Analysis generated: April 18, 2026 12:15 UTC
  • Evidence: ✓ Snapshot stored · ✓ Hash verified
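The provenance record above pins the archived snapshot to a SHA-256 content hash. As a minimal sketch, re-verifying a stored snapshot against that recorded digest could look like the following; the function names and local-file workflow are illustrative, not part of ConductAtlas's actual tooling:

```python
import hashlib

# Content hash recorded in the Evidence Provenance block above.
RECORDED_HASH = "63e816d10b250e0548b988099f94a40d1e970e1f90744d59ee3d2053af23c1a7"

def sha256_hex(data: bytes) -> str:
    """Return the lowercase hex SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

def snapshot_matches(snapshot_bytes: bytes, recorded_hash: str = RECORDED_HASH) -> bool:
    """True if the stored snapshot still hashes to the recorded digest."""
    return sha256_hex(snapshot_bytes) == recorded_hash.lower()
```

Note that the page does not state exactly which bytes were hashed (raw HTML, normalized text, etc.), so a byte-for-byte copy of the archived snapshot is assumed here.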
Citation Record
Entity: Google Gemini
Document: Google Generative AI Prohibited Use Policy
Record ID: CA-P-011359
Captured: 2026-04-18 12:15:17 UTC
SHA-256: 63e816d10b250e05…
URL: https://conductatlas.com/platform/google-gemini/google-generative-ai-prohibited-use-policy/deceptive-content-prohibition/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity
Medium
Categories



Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Google Gemini's Deceptive Content Prohibition clause do?

This provision covers a broad range of AI-assisted deception use cases that have attracted significant regulatory and enforcement attention, including fake review generation and AI-powered impersonation.

How does this clause affect you?

Users who use Gemini to generate fake reviews, spread misinformation, or impersonate others violate this policy and risk account termination, in addition to potential legal liability under applicable consumer protection and fraud laws.

Is ConductAtlas affiliated with Google Gemini?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Google Gemini.