Google Gemini · Google Generative AI Prohibited Use Policy

Minor Protection and CSAM Absolute Prohibition

High severity · High confidence · Explicit document language · Unique (0 of 325 platforms)
Recent governance activity: Google Gemini recorded 8 documented changes in the last 30 days.
Document Record

What it is

Generating any content that sexualizes minors or could be used to exploit or harm children is absolutely prohibited under this policy, with no exceptions.

This analysis describes what Google Gemini's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This is an absolute prohibition with no carve-outs, and violation constitutes both a policy breach justifying immediate termination and potential criminal liability under applicable law in most jurisdictions.

Consumer impact (what this means for users)

Any user who generates content sexualizing or exploiting minors through Gemini faces immediate account termination and may be subject to criminal prosecution under laws such as 18 U.S.C. Section 2256 in the US and equivalent statutes internationally.

How other platforms handle this

Meta Medium

Our Products are not directed at children. You must be at least 13 years old to use our Products. If you are under the age of 18, you must have the permission of your parent or legal guardian to use our Products. You represent that you are 13 years of age or older, that you have the legal right to e...

Runway Medium

You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.

Mistral AI Medium

Customer will not, and will not permit any other person (including any End User) to: ... (d) attempt to reverse engineer, decompile, or otherwise attempt to discover the source code or underlying components (e.g., algorithms, weights, or systems) of the Mistral AI Products, including using the Outpu...


Monitoring

Google Gemini has changed this document before.

Original Clause Language (Document Record)

"Don't use our services to generate content that sexualizes minors or that could be used to exploit or harm children."

— Excerpt from Google Gemini's Google Generative AI Prohibited Use Policy


Institutional analysis (Compliance & governance intelligence)

REGULATORY LANDSCAPE: This provision engages 18 U.S.C. Section 2256 and related CSAM statutes in the US, EU Directive 2011/93/EU on combating sexual abuse and exploitation of children, COPPA, and mandatory reporting obligations under 18 U.S.C. Section 2258A (which requires electronic service providers to report apparent CSAM to NCMEC). Enforcement authorities include the DOJ, FBI, Homeland Security Investigations, NCMEC, and Europol. Google, as a service provider, has independent mandatory reporting obligations that are not contingent on user agreement to this provision.

GOVERNANCE EXPOSURE: High. This provision reflects a legal baseline obligation rather than a purely contractual choice. Enterprises deploying Gemini in consumer-facing contexts must implement technical and procedural safeguards to prevent CSAM generation attempts, including age verification, prompt filtering, and output monitoring.

JURISDICTION FLAGS: CSAM prohibitions are among the most uniformly enforced across jurisdictions globally. All major legal systems treat CSAM as a criminal matter, and no jurisdictional carve-outs apply. Mandatory reporting obligations exist in the US, EU, UK, Canada, Australia, and most other major jurisdictions.

CONTRACT AND VENDOR IMPLICATIONS: API licensees have independent obligations to report CSAM to NCMEC (in the US) and should ensure their agreements with sub-processors and downstream developers include equivalent mandatory reporting and zero-tolerance provisions. This clause should be treated as non-negotiable in any API licensing arrangement.

COMPLIANCE CONSIDERATIONS: Compliance teams should implement zero-tolerance operational controls for CSAM, including automated detection, immediate escalation protocols, mandatory reporting procedures to NCMEC or equivalent national bodies, and regular audits of content filtering systems. This provision requires a dedicated incident response plan.
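The zero-tolerance controls described above (automated detection, immediate escalation, mandatory reporting) imply a policy gate with no permissive middle ground: flagged output is blocked first and reviewed or reported after. A minimal sketch of such a gate follows; the risk score, thresholds, and action names are all hypothetical, not part of Google's policy or any real detection system:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    ALLOW = auto()
    BLOCK_AND_REVIEW = auto()   # blocked, queued for human review
    BLOCK_AND_REPORT = auto()   # blocked, escalated to the mandatory-reporting path

@dataclass
class Decision:
    action: Action
    reason: str

# Illustrative thresholds only; real systems tune these against audit data.
REVIEW_THRESHOLD = 0.2
REPORT_THRESHOLD = 0.8

def gate(risk_score: float) -> Decision:
    """Zero-tolerance gate: any non-trivial risk blocks the output; high
    confidence triggers mandatory reporting (e.g., to NCMEC in the US)."""
    if risk_score >= REPORT_THRESHOLD:
        return Decision(Action.BLOCK_AND_REPORT, "high-confidence detection")
    if risk_score >= REVIEW_THRESHOLD:
        return Decision(Action.BLOCK_AND_REVIEW, "possible violation")
    return Decision(Action.ALLOW, "below review threshold")
```

The design point is that blocking precedes adjudication: there is no path where suspect content is delivered while a review is pending.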


Applicable agencies

  • FTC
    The FTC has jurisdiction over online services that fail to adequately protect minors under COPPA and related consumer protection frameworks.
  • State AG
    State attorneys general have jurisdiction over child protection violations and can bring criminal and civil actions for CSAM-related offenses under state law.

Applicable regulations

  • CFAA (United States, Federal)
  • DSA (European Union)
  • Trump Executive Order on AI Policy Framework (US)

Provision details

Document information
  • Document: Google Generative AI Prohibited Use Policy
  • Entity: Google Gemini
  • Document last updated: May 12, 2026

Tracking information
  • First tracked: April 18, 2026
  • Last verified: May 12, 2026
  • Record ID: CA-P-011357
  • Document ID: CA-D-00325

Evidence Provenance
  • Source URL: Wayback Machine
  • Content hash (SHA-256): 63e816d10b250e0548b988099f94a40d1e970e1f90744d59ee3d2053af23c1a7
  • Analysis generated: April 18, 2026 12:15 UTC
  • Evidence: ✓ Snapshot stored · ✓ Hash verified
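The content hash above allows anyone holding a copy of the archived snapshot to verify it independently. A minimal sketch, assuming the snapshot has been saved locally (the filename `snapshot.html` is a placeholder, not part of the record):

```python
import hashlib

# Expected SHA-256 from the Evidence Provenance record above.
EXPECTED = "63e816d10b250e0548b988099f94a40d1e970e1f90744d59ee3d2053af23c1a7"

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large snapshots never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# "snapshot.html" is a hypothetical local copy of the archived document:
# if sha256_of("snapshot.html") == EXPECTED:
#     print("snapshot matches the recorded hash")
```

A match confirms the local copy is byte-identical to the version the record was generated from; any edit, however small, produces a different digest.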
Citation Record
Entity: Google Gemini
Document: Google Generative AI Prohibited Use Policy
Record ID: CA-P-011357
Captured: 2026-04-18 12:15:17 UTC
SHA-256: 63e816d10b250e05…
URL: https://conductatlas.com/platform/google-gemini/google-generative-ai-prohibited-use-policy/minor-protection-and-csam-absolute-prohibition/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
  • Severity: High
  • Categories:


Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Google Gemini's Minor Protection and CSAM Absolute Prohibition clause do?

It prohibits using Gemini to generate any content that sexualizes minors or that could be used to exploit or harm children. This is an absolute prohibition with no carve-outs, and violation constitutes both a policy breach justifying immediate termination and potential criminal liability under applicable law in most jurisdictions.

How does this clause affect you?

Any user who generates content sexualizing or exploiting minors through Gemini faces immediate account termination and may be subject to criminal prosecution under laws such as 18 U.S.C. Section 2256 in the US and equivalent statutes internationally.

Is ConductAtlas affiliated with Google Gemini?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Google Gemini.