8 Total
4 High severity
4 Medium severity
0 Low severity
Summary

These are Google's additional rules for using its AI products such as Gemini, covering what you can and cannot ask the AI to create or do. The most important thing to know is that Google explicitly warns that Gemini's responses may be inaccurate or inappropriate, and that Google takes no responsibility for those outputs; if you act on bad AI advice, you bear the consequences. You should also avoid sharing sensitive personal information in your prompts, since your conversations with Gemini may be reviewed by Google employees to improve the AI.

Technical Summary

This document is Google's Generative AI Additional Terms of Service and Use Policy, which governs access to and use of Google's generative AI products, including Gemini Apps, and supplements Google's general Terms of Service. The most significant obligations include prohibitions on generating certain categories of content (CSAM, content facilitating real-world violence, deceptive synthetic media, malware), restrictions on using generated outputs to harm individuals, and a requirement that users not attempt to circumvent built-in safety filters. Notable provisions include an explicit acknowledgment that AI outputs 'may be inaccurate or inappropriate', with Google disclaiming responsibility for such outputs, and a broad reservation of rights to restrict or terminate access to generative AI features without notice, a broader termination right than is typical in standard SaaS agreements. The policy engages the GDPR (processing of prompts and outputs), the EU AI Act (high-risk AI system obligations and transparency requirements for AI-generated content), COPPA (restrictions on use by minors), and FTC Act Section 5 (deceptive AI-generated content). Material compliance considerations include disclosure obligations for AI-generated synthetic media under emerging state laws (California AB 602, Texas, Georgia) and the policy's interaction with the EU AI Act's prohibited-practices provisions for generative AI systems deployed at scale.

Evidence Provenance
Captured April 18, 2026 07:55 UTC
Document ID CA-D-000325
Version ID CA-V-000620
SHA-256 01748d5e9112f30f5e23aacc843263b42d974d222e60d46b00f0d74bc61262b0
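The SHA-256 fingerprint above lets anyone independently re-verify a downloaded copy of the capture. A minimal sketch in Python, using only the standard library; the function name and workflow are illustrative, not part of this service's tooling:

```python
import hashlib

# Published fingerprint from the Evidence Provenance record above.
EXPECTED_SHA256 = "01748d5e9112f30f5e23aacc843263b42d974d222e60d46b00f0d74bc61262b0"

def verify_capture(capture_bytes: bytes, expected_sha256: str) -> bool:
    """Return True if the capture's SHA-256 digest matches the published one."""
    return hashlib.sha256(capture_bytes).hexdigest() == expected_sha256.lower()

# Self-check against freshly computed digests (the archived bytes are not shown here).
sample = b"example capture text"
assert verify_capture(sample, hashlib.sha256(sample).hexdigest())
assert not verify_capture(sample + b" tampered", hashlib.sha256(sample).hexdigest())
```

To check a real download, pass the archived file's bytes and EXPECTED_SHA256; any edit to the file, however small, changes the digest and the check fails.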
Snapshot stored, text extracted, change verified, and cryptographically signed.
Change Timeline
0 captures to date
High Severity — 4 provisions
Medium Severity — 4 provisions


Applicable Regulations

EU AI Act (European Union)
CCPA/CPRA (California, USA)
CFAA (United States, federal)
CAN-SPAM (United States, federal)
DMA (European Union)
DSA (European Union)
GDPR (European Union)
UK GDPR (United Kingdom)