Unless you opt out using the API data settings for your project, Google states it may use the prompts and responses you send through the API to train and improve its AI models.
This analysis describes what Google AI Studio's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This provision establishes a default data-use posture that applies to all API traffic until a developer affirmatively changes a project setting. Developers handling personal data from end users should assess whether this default is consistent with their data protection obligations before deploying.
Developers and, by extension, end users of developer-built applications may have their API inputs and outputs used for Google model improvement unless the developer has configured the opt-out setting. The provision places the obligation to opt out on the developer, not the end user.
How other platforms handle this
We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.
After registration, you may create, upload or transmit files, documents, videos, images, data or information as part of your use of the Service (collectively, "User Content"). This includes any inputs you provide to our AI-powered support tools and outputs generated in response to your inputs. User ...
We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization). We may leverage these models independent of user selection for processi...
Monitoring
Google AI Studio has changed this document before.
"When you use the Gemini API via Google AI Studio, Google uses the content you submit to and generate from the API ("API Data") to provide, improve, and develop Google products and services, including Google's AI models, unless you opt out using the API data settings for your project." — Excerpt from Google AI Studio's Gemini API Terms of Service
1) REGULATORY LANDSCAPE: This provision engages GDPR Articles 5(1)(b) (purpose limitation), 6 (lawful basis), and 13/14 (transparency), as data submitted through the API may include personal data from end users. If developers are established in the EU/EEA or process data of EU/EEA residents, the default-on posture for model training may require a lawful basis assessment. The provision also engages CCPA disclosure requirements where personal information of California residents is included in API traffic. The relevant EU enforcement authorities are national data protection authorities and the European Data Protection Board; in the US, the FTC has jurisdiction over deceptive data practices.

2) GOVERNANCE EXPOSURE: High. The default-on data-use configuration for model training is the most significant compliance exposure in this document. Developers who process personal data of end users without auditing this setting may be operating in a manner inconsistent with their stated privacy policies or applicable law. The risk is highest for developers in EU/EEA jurisdictions and those processing sensitive categories of data.

3) JURISDICTION FLAGS: EU/EEA developers face the highest exposure given GDPR purpose limitation and transparency requirements. California-resident user data implicates CCPA service provider provisions and the question of whether Google qualifies as a service provider or a third party under the developer's data flows. Illinois and New York do not create specific additional flags for this provision absent biometric or health data processing.

4) CONTRACT AND VENDOR IMPLICATIONS: Procurement teams should assess whether the API Terms alone constitute a sufficient data processing agreement under GDPR Article 28, or whether a separate DPA must be executed. The provision does not itself delineate controller and processor roles, and developers should not assume the terms resolve that question. B2B platforms reselling API-based services should evaluate whether downstream DPA obligations exist.

5) COMPLIANCE CONSIDERATIONS: Compliance teams should document the current opt-out status for each active API project. Privacy policies for end-user-facing applications should be reviewed to determine whether Google's use of API data for model improvement is adequately disclosed. Data mapping exercises should include API traffic as a data flow to Google.
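The per-project documentation step above can be sketched as a minimal internal audit register. This is an illustrative sketch only: the record fields, function names, and project IDs are hypothetical, and nothing here calls a Google API; verifying the actual opt-out status still requires checking each project's API data settings in Google AI Studio.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ApiProjectRecord:
    """One row in a hypothetical internal audit register."""
    project_id: str
    training_opt_out: bool  # True = opt-out confirmed in the project's API data settings
    verified_on: date

def projects_needing_review(records):
    """Return IDs of projects whose API traffic may still be used for model training."""
    return [r.project_id for r in records if not r.training_opt_out]

# Example register entries (hypothetical project IDs)
register = [
    ApiProjectRecord("proj-alpha", training_opt_out=True, verified_on=date(2025, 1, 15)),
    ApiProjectRecord("proj-beta", training_opt_out=False, verified_on=date(2025, 1, 15)),
]
print(projects_needing_review(register))  # flags proj-beta for review
```

A register like this also gives privacy-policy reviews and data-mapping exercises a concrete artifact to reference, rather than relying on ad hoc knowledge of which projects were configured when.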
Full compliance analysis
Regulatory citations, enforcement risk, and due diligence action items.
How 10 AI platforms describe the use of user data for model training, improvement, and development, based on archived governance provisions.
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Google AI Studio.