Be built and tested for safety. We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm. In safety-critical areas, we will develop and maintain robust feedback mechanisms so that users and others can flag concerns, and will work to incorporate safety considerations at all stages of product development.
Without specified safety testing standards, audit rights, or public disclosure of safety test results, this commitment cannot be independently verified by consumers, regulators, or enterprise customers.
Google's AI Principles set out aspirational commitments about what kinds of AI the company will and won't build, which indirectly affects every person who uses Google products — from Search to Gemini to Google Workspace. However, the document creates no legally enforceable rights for consumers: there is no opt-out mechanism, no user complaint pathway, and no independent auditor verifying compliance with the stated principles. Consumers who believe Google's AI practices contradict its publicly stated principles can file a complaint with the FTC at reportfraud.ftc.gov.