Provisions flagged: 8 total (2 high severity, 5 medium severity, 1 low severity)
Summary

This document sets the rules for developers who use Google's Gemini API and AI Studio to build applications and access AI models. By default, Google states it may use the prompts, inputs, and outputs developers send through the API to improve its AI models, unless the developer actively turns off this setting in the API console. Developers who build apps using this API and handle data from real users should review their own privacy obligations and check whether their project's data-use setting is configured as intended.

Technical / Legal Breakdown

This document governs developer access to and use of the Gemini API and Google AI Studio. It operates as Additional Terms of Service that supplement the Google APIs Terms of Service and, where applicable, the Google Cloud Platform Terms of Service.

Under the agreement, developers grant Google a license to use submitted content to provide, improve, and develop Google products and services, and the terms authorize Google to use API inputs and outputs for model improvement unless developers opt out via a designated setting. Developers remain responsible for obtaining end-user consent where required and for complying with applicable law when deploying API-powered applications.

Notably, the content license for model improvement applies by default and requires affirmative opt-out rather than opt-in. The agreement also disclaims warranties and caps Google's liability at the amounts paid in the prior twelve months. Such provisions are common in developer API agreements, but their enforceability may be constrained by applicable law depending on the jurisdiction.

The document engages the GDPR and related EU data protection frameworks, the EU AI Act, the CCPA, and the FTC's general unfair-and-deceptive-practices authority, particularly given the default data-use posture and the AI-specific obligations on developers who deploy Gemini models in end-user-facing products. Compliance exposure turns significantly on whether a developer is classified as a data controller or processor under applicable privacy law, and on the use case's risk tier under the EU AI Act.

Institutional Analysis

Institutional analysis is available with Professional: regulatory exposure by statute, material risk assessment, vendor due-diligence action items, and enforcement precedent.
High — 2 provisions
Medium — 5 provisions
Low — 1 provision

Monitoring

Google AI Studio has updated this document before.

Watcher includes same-day alerts, structured change summaries, and monitoring for up to 10 platforms.


Professional Governance Intelligence

Need provision-level monitoring and regulatory mapping?

Professional includes governance timelines, compliance memos, audit-ready analysis, and full provision tracking.


Cross-platform context

See how other platforms handle "Default Model Training on API Inputs and Outputs" and similar clauses.


Mapped Governance Frameworks

California AB 2013: AI Training Data Transparency (US-CA)
Archival Provenance: Source & Archival Record
Last Captured May 12, 2026 06:36 UTC
Capture Method Automated scheduled archival capture
Document ID CA-D-000794
Version ID CA-V-002523
SHA-256 85a8f65a2df7e432f7689a5008cfac888c931520f7b08027a68335239fca1f31
✓ Snapshot stored ✓ Text extracted ✓ Change verified ✓ Hash verified
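The "Hash verified" step in the record above can be reproduced independently: anyone holding a copy of the captured snapshot can recompute its SHA-256 digest and compare it against the recorded value. A minimal sketch in Python (the file path and function names are illustrative, not part of any published archival tooling):

```python
import hashlib

# Recorded digest from the archival record above.
RECORDED = "85a8f65a2df7e432f7689a5008cfac888c931520f7b08027a68335239fca1f31"

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large snapshots don't need to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_snapshot(path: str) -> bool:
    """True only if the local capture is byte-identical to the archived one."""
    return sha256_of_file(path) == RECORDED
```

Any change to the captured bytes, however small, produces a different digest, which is what makes the recorded hash usable as tamper evidence.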

Governance Monitoring

Monitor governance changes across the platforms you rely on.

Structured alerts for policy changes, governance events, and provision updates across 318+ platforms.
