We will not design or deploy AI in the following application areas:
- Technologies that cause or are likely to cause overall harm. Where there is material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
- Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
- Technologies that gather or use information for surveillance violating internationally accepted norms.
- Technologies whose purpose contravenes widely accepted principles of international law and human rights.
This is the most concrete commitment in the document — it defines a floor below which Google says it will not go — but it is self-policed with no external verification.
Google's AI Principles set out aspirational commitments about what kinds of AI the company will and won't build, a choice that indirectly affects every person who uses Google products, from Search to Gemini to Google Workspace. However, the document creates no legally enforceable rights for consumers: there is no opt-out mechanism, no user complaint pathway, and no independent auditor verifying compliance with the stated principles. Consumers who believe Google's AI practices contradict its publicly stated principles can file a complaint with the FTC at reportfraud.ftc.gov.