Google commits to designing its AI systems to avoid discriminating against people based on race, gender, religion, disability, or other protected characteristics.
This provision means Google acknowledges its AI can produce biased outputs affecting people's access to information and opportunities, and commits to reducing that risk. Consumers who experience discriminatory AI outputs, however, have limited recourse beyond filing regulatory complaints.
How other platforms handle this
Don't enter confidential information in your Gemini Apps conversations. For example, if you're using a Gemini app to help with code, don't paste confidential source code into the conversation. To the extent possible, please don't share information in your Gemini Apps conversations that you wouldn't ...
Webull data is not intended to provide financial, legal, tax or investment advice or recommendations. You are solely responsible for determining whether any investment, investment strategy or related transaction is appropriate for you based on your personal investment objectives, financial circumsta...
We are not a licensed medical service provider, and any information provided by us should not be interpreted as medical advice or construed to form a physician-patient relationship. Be sure to talk to your doctor before starting Noom or any health or wellness service, and don't use Noom if you're ha...
Algorithmic bias in AI systems used for consequential decisions (search rankings, loan approvals, healthcare recommendations) can cause real-world harm to protected groups — this commitment creates a public standard but no enforcement mechanism.
REGULATORY FRAMEWORK: This provision directly implicates:
- EU AI Act Art. 10 (data governance requirements to minimize bias in training data for high-risk AI)
- GDPR Art. 22 (rights related to automated decision-making)
- US Fair Housing Act (42 U.S.C. § 3604) and Equal Credit Opportunity Act (15 U.S.C. § 1691), where AI affects housing or credit decisions
- Title VII of the Civil Rights Act, where AI is used in employment contexts
- FTC Act Section 5, for deceptive practices
The CFPB has also issued guidance on AI bias in financial services contexts. Enforcement authorities include the FTC, CFPB, EEOC, HUD, and the EU AI Office.