This page describes what the document states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability may vary by jurisdiction.
This document sets out the rules for using Google's generative AI services, including the Gemini app and Gemini API. The most significant provisions prohibit using Gemini to generate harmful, deceptive, or illegal content and restrict use in high-stakes contexts like medical diagnosis or legal advice without human oversight. If you use Gemini for a business or application, review the prohibited use categories carefully, as violations can result in access being suspended or terminated.
This document is Google's Generative AI Additional Terms of Service and Use Policy, which governs use of Google's generative AI products, including Gemini apps and the Gemini API, and supplements the Google Terms of Service. Under the agreement, users must not use the services to generate content that facilitates violence, spreads misinformation, constitutes CSAM, enables cyberattacks, or otherwise violates applicable law, and Google reserves the right to suspend or terminate access for policy violations.

The use policy enumerates specific prohibited categories of content and use cases, including content designed to deceive, content that sexualizes minors, and automated pipelines operating without human oversight in high-stakes domains such as healthcare and legal advice. These AI-specific safety constraints on outputs distinguish the document operationally from general-purpose ToS frameworks.

Given its scope — AI-generated content, automated decision-making disclosures, and restrictions on use involving minors — the document implicates the EU AI Act, the GDPR, the FTC Act, and COPPA; whether specific regulatory obligations apply depends on user jurisdiction and deployment context. Compliance teams deploying the Gemini API in commercial or regulated settings should evaluate whether the policy's use-case-specific restrictions create contractual risk, particularly in healthcare, legal, financial, and educational applications, where the policy explicitly limits reliance on AI outputs.
Google Gemini has updated this document before.