7 provisions flagged: 4 high severity, 2 medium severity, 1 low severity
Summary

This document sets out the rules for using Google's generative AI services, including the Gemini app and Gemini API. The most significant provisions prohibit using Gemini to generate harmful, deceptive, or illegal content and restrict use in high-stakes contexts like medical diagnosis or legal advice without human oversight. If you use Gemini for a business or application, review the prohibited use categories carefully, as violations can result in access being suspended or terminated.

Technical / Legal Breakdown

This document is Google's Generative AI Additional Terms of Service and Use Policy. It governs use of Google's generative AI products, including the Gemini apps and the Gemini API, and operates as a supplement to the Google Terms of Service.

The agreement prohibits using the services to generate content that facilitates violence, spreads misinformation, creates child sexual abuse material, enables cyberattacks, or otherwise violates applicable law, and Google reserves the right to suspend or terminate access for policy violations. The use policy enumerates specific prohibited categories of content and use cases, including content designed to deceive, content that sexualizes minors, and automated pipelines without human oversight in high-stakes domains such as healthcare and legal advice. These AI-specific safety constraints on outputs distinguish the document operationally from general-purpose terms-of-service frameworks.

Given its scope over AI-generated content, automated decision-making disclosures, and restrictions on use involving minors, the document engages the EU AI Act, the GDPR, the FTC Act, and COPPA; whether specific regulatory obligations apply depends on user jurisdiction and deployment context. Compliance teams deploying the Gemini API in commercial or regulated settings should evaluate whether the policy's use-case-specific restrictions create contractual risk, particularly in healthcare, legal, financial, and educational applications where the policy explicitly limits reliance on AI outputs.


Monitoring

Google Gemini has updated this document before.



Cross-platform context

See how other platforms handle AI Output Accuracy Disclaimer and similar clauses.


Mapped Governance Frameworks

- California AB 2013 (AI Training Data Transparency) — US-CA
- DSA — European Union
- EFTA / Reg E — United States (Federal)
Archival Provenance: Source & Archival Record
Last Captured: April 18, 2026 07:55 UTC
Capture Method: Automated scheduled archival capture
Document ID: CA-D-000325
Version ID: CA-V-000620
SHA-256: 01748d5e9112f30f5e23aacc843263b42d974d222e60d46b00f0d74bc61262b0
✓ Snapshot stored ✓ Text extracted ✓ Change verified ✓ Hash verified
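The recorded digest can be checked against a local copy of the snapshot. Below is a minimal sketch in Python; the file path and helper names are hypothetical, and only the SHA-256 value comes from the archival record above.

```python
import hashlib

# Digest recorded in the archival record above.
RECORDED_SHA256 = "01748d5e9112f30f5e23aacc843263b42d974d222e60d46b00f0d74bc61262b0"

def sha256_of_file(path: str) -> str:
    """Stream the file through SHA-256 so large snapshots don't load into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_snapshot(path: str) -> bool:
    """Return True if the local snapshot matches the recorded digest."""
    return sha256_of_file(path) == RECORDED_SHA256
```

Any edit to the captured document, even a single byte, produces a different digest, so a mismatch signals that the local copy does not correspond to this version.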
