Google · Google AI Principles

Prohibited AI Applications

Medium severity

What it is

Google publicly commits to never building AI for weapons of mass destruction, illegal surveillance, or other technologies that violate international human rights norms.

Consumer impact (what this means for users)

In plain terms, Google has publicly promised not to build AI tools that could be weaponized against civilians or used for illegal mass surveillance. However, because no external body enforces the principles, consumers cannot compel compliance through legal action.

How other platforms handle this

Anthropic (Medium severity)

Anthropic may enter into contracts with certain governmental customers that tailor use restrictions to that customer's public mission and legal authorities if, in Anthropic's judgment, the contractual use restrictions and applicable safeguards are adequate to mitigate the potential harms addressed b...

Google Gemini (Medium severity)

If you use Gemini as part of a Google Workspace account (such as through your employer or school), your use is governed by your organization's Google Workspace agreement, which may have different data handling terms than those described in this notice. Your organization's administrator can configure...

OpenAI (Medium severity)

Operators can expand ChatGPT's defaults for users, such as allowing ChatGPT to produce adult-only content that it wouldn't produce by default. Operators can restrict ChatGPT's defaults for users, such as preventing ChatGPT from producing content that isn't related to their core use case. Operators c...


Why it matters (compliance & risk perspective)

This provision is one of the most specific public commitments made by a major AI company about what it will not build, but it relies on undefined terms like 'internationally accepted norms' that leave significant interpretive room.

Original clause language

We will not design or deploy AI in the following application areas:
  • Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  • Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  • Technologies that gather or use information for surveillance violating internationally accepted norms.
  • Technologies whose purpose contravenes widely accepted principles of international law and human rights.

Institutional analysis (Compliance & legal intelligence)

REGULATORY FRAMEWORK: This provision implicates the EU AI Act (Regulation 2024/1689) Art. 5, which prohibits certain AI practices outright (social scoring, real-time biometric surveillance in public spaces); International Humanitarian Law applicable to autonomous weapons systems; FTC Act Section 5 (unfair or deceptive practices if commitments are publicly made and violated); and US Export Administration Regulations (EAR) governing dual-use technology exports. The EU AI Office and FTC hold primary enforcement authority in their respective jurisdictions.


Applicable agencies

  • FTC
    The FTC has authority under FTC Act Section 5 to act against deceptive practices if Google's actual AI deployments materially contradict these public commitments.

Applicable regulations

  • CFAA (United States, federal)
  • DMCA (United States, federal)
  • DSA (European Union)

Provision details

Document information
  • Document: Google AI Principles
  • Entity: Google
  • Document last updated: March 24, 2026

Tracking information
  • First tracked: April 27, 2026
  • Last verified: April 27, 2026
  • Record ID: CA-P-002363
  • Document ID: CA-D-00016

Evidence Provenance
  • Source URL: Wayback Machine
  • SHA-256: 01eac047cd91414b4bffbdeac9454c7595d79a555798103c33fd9d1b80ee2c7f
  • Verified: ✓ Snapshot stored, ✓ Change verified
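The provenance record above can be checked mechanically: recomputing the SHA-256 of the stored snapshot and comparing it to the published digest confirms the archived text has not changed since capture. A minimal sketch in Python (the file path is an illustrative assumption; this is not a ConductAtlas API):

```python
import hashlib

# Published digest from the provenance record above.
EXPECTED_SHA256 = "01eac047cd91414b4bffbdeac9454c7595d79a555798103c33fd9d1b80ee2c7f"

def sha256_of_file(path: str) -> str:
    """Hash the file in chunks so large snapshots never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot_unchanged(path: str, expected: str = EXPECTED_SHA256) -> bool:
    # Compare case-insensitively: hex digests are conventionally lowercase,
    # but records sometimes store them uppercase.
    return sha256_of_file(path).lower() == expected.lower()
```

Any single-byte change to the snapshot produces a completely different digest, so a match is strong evidence the archived clause text is the one that was originally captured.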
How to Cite
ConductAtlas Policy Archive
Entity: Google | Document: Google AI Principles | Record: CA-P-002363
Captured: 2026-04-27 09:45:22 UTC | SHA-256: 01eac047cd91414b…
URL: https://conductatlas.com/platform/google/google-ai-principles/prohibited-ai-applications/
Accessed: April 28, 2026
Classification
  • Severity: Medium
  • Categories:
