Google commits to only building AI where it believes the overall benefits to society outweigh the risks — a broad internal cost-benefit test applied before developing new AI applications.
In practice, this clause means Google claims to weigh societal harm before launching AI products; but because the assessment is internal and unpublished, consumers have no visibility into how, or whether, the test is applied to the products they use every day.
This provision establishes a self-defined, internally assessed standard with no independent verification mechanism, meaning the public cannot confirm whether any given Google AI product actually passed this test.
REGULATORY FRAMEWORK: This provision aligns with the EU AI Act's conformity assessment requirements for high-risk AI systems (Arts. 9-16), which mandate documented risk assessments; NIST AI Risk Management Framework (Govern, Map, Measure, Manage functions); and OECD AI Principles (Principle 1.1 on inclusive growth and sustainable development). It also resonates with the UK AI Safety Institute's evaluation frameworks. No single regulator directly enforces this specific provision.