OpenAI is concerned that GPT-4o's conversational and emotional expressiveness could cause some users — particularly vulnerable people — to form unhealthy emotional attachments to the AI.
If you use GPT-4o as a companion or emotional support tool, OpenAI itself warns this may not be in your long-term interest — the model's expressive voice and persona can encourage emotional dependency that the company has not yet fully mitigated.
An AI company publicly acknowledging that its product may be harmful to users' emotional wellbeing and long-term interests is an unusual and significant consumer safety disclosure, particularly where vulnerable populations, including those with mental health conditions, may be disproportionately affected.
(1) REGULATORY FRAMEWORK: This provision implicates FTC Act Section 5 (unfair practices that cause substantial consumer injury not outweighed by countervailing benefits), EU AI Act Article 5(1)(b) (the prohibition on AI that exploits vulnerabilities to distort behavior in a manner that causes harm), and potentially FDA jurisdiction if the product is used in mental health or therapeutic contexts. The FTC and the EU AI Office are the primary enforcement authorities.