We want to be clear that while we are not developing AI for use in weapons, we will continue to work with governments and the military on many other applications, including cybersecurity, training, military recruitment, veterans' healthcare, and healthcare more generally. We recognize that the boundaries of this guidance are hard to define, and these use cases will continue to need scrutiny.
This self-acknowledged ambiguity creates significant governance risk: Google concedes that the boundary between permitted military AI work and prohibited weapons-adjacent AI is unclear, leaving substantial room for dual-use concerns without any independent adjudication mechanism.
Google's AI Principles set out aspirational commitments about what kinds of AI the company will and won't build, which indirectly affects every person who uses Google products, from Search to Gemini to Google Workspace. However, the document creates no legally enforceable rights for consumers: there is no opt-out mechanism, no user complaint pathway, and no independent auditor verifying compliance with the stated principles. Consumers who believe Google's AI practices contradict its publicly stated principles can file a complaint with the FTC at reportfraud.ftc.gov.