Google · Google AI Principles

Fairness and Bias Avoidance

Medium severity

What it is

Google commits to designing its AI systems to avoid discriminating against people based on race, gender, religion, disability, or other protected characteristics.

Consumer impact (what this means for users)

This provision means Google acknowledges its AI can produce biased outputs affecting people's access to information and opportunities, and commits to reducing this harm. However, consumers who experience discriminatory AI outputs have limited recourse beyond filing regulatory complaints.

What you can do

⚠️ These actions may provide transparency or partial mitigation but may not fully address the underlying issue. Effectiveness varies by jurisdiction and individual circumstances.
  • File a Complaint
    If you believe a Google AI product produced a discriminatory outcome affecting you (e.g., biased search results, discriminatory recommendations), file a complaint with the FTC at reportfraud.ftc.gov describing the specific AI product, the output you received, and why you believe it was discriminatory.

How other platforms handle this

Google Gemini Medium

Don't enter confidential information in your Gemini Apps conversations. For example, if you're using a Gemini app to help with code, don't paste confidential source code into the conversation. To the extent possible, please don't share information in your Gemini Apps conversations that you wouldn't ...

Webull Medium

Webull data is not intended to provide financial, legal, tax or investment advice or recommendations. You are solely responsible for determining whether any investment, investment strategy or related transaction is appropriate for you based on your personal investment objectives, financial circumsta...

Noom Medium

We are not a licensed medical service provider, and any information provided by us should not be interpreted as medical advice or construed to form a physician-patient relationship. Be sure to talk to your doctor before starting Noom or any health or wellness service, and don't use Noom if you're ha...


Why it matters (compliance & risk perspective)

Algorithmic bias in AI systems used for consequential decisions (search rankings, loan approvals, healthcare recommendations) can cause real-world harm to protected groups. This commitment creates a public standard but no enforcement mechanism.

View original clause language
Avoid creating or reinforcing unfair bias. AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.

Institutional analysis (Compliance & legal intelligence)

REGULATORY FRAMEWORK: This provision directly implicates the EU AI Act Art. 10 (data governance requirements to minimize bias in training data for high-risk AI), GDPR Art. 22 (rights related to automated decision-making), US Fair Housing Act (42 U.S.C. § 3604) and Equal Credit Opportunity Act (15 U.S.C. § 1691) where AI affects housing or credit decisions, Title VII of the Civil Rights Act where AI is used in employment contexts, and the FTC Act Section 5 for deceptive practices. The CFPB has also issued guidance on AI bias in financial services contexts. Enforcement authorities include the FTC, CFPB, EEOC, HUD, and EU AI Office.


Applicable agencies

  • FTC
    FTC has enforcement authority over algorithmic bias as an unfair or deceptive practice under FTC Act Section 5, and has identified AI bias as a priority enforcement area.
    File a complaint →
  • CFPB
    CFPB has jurisdiction over AI bias in financial services contexts, including credit decisions made using Google AI tools.
    File a complaint →

Provision details

Document information
Document
Google AI Principles
Entity
Google
Document last updated
March 24, 2026
Tracking information
First tracked
April 27, 2026
Last verified
April 27, 2026
Record ID
CA-P-002365
Document ID
CA-D-00016
Evidence Provenance
Source URL
Wayback Machine
SHA-256
01eac047cd91414b4bffbdeac9454c7595d79a555798103c33fd9d1b80ee2c7f
Verified
✓ Snapshot stored   ✓ Change verified
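The SHA-256 digest above lets anyone independently confirm that a stored snapshot has not changed since capture. A minimal verification sketch, assuming you have a local copy of the archived page (the filename `snapshot.html` is hypothetical, not part of the record):

```python
import hashlib

# Recorded digest from the provenance record (CA-P-002365).
RECORDED_SHA256 = "01eac047cd91414b4bffbdeac9454c7595d79a555798103c33fd9d1b80ee2c7f"

def sha256_of_file(path: str) -> str:
    """Hash the file in chunks so large snapshots aren't loaded into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the published digest; any byte-level change breaks the match.
# sha256_of_file("snapshot.html") == RECORDED_SHA256
```

Note that a match only proves the bytes are identical to what was hashed at capture time; it says nothing about whether the capture itself was faithful to the live page.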
How to Cite
ConductAtlas Policy Archive
Entity: Google | Document: Google AI Principles | Record: CA-P-002365
Captured: 2026-04-27 09:45:22 UTC | SHA-256: 01eac047cd91414b…
URL: https://conductatlas.com/platform/google/google-ai-principles/fairness-and-bias-avoidance/
Accessed: April 28, 2026
Classification
Severity
Medium
Categories
