This means that when you interact with a GPT-4o-powered application, the safety settings you experience may differ significantly from ChatGPT's defaults: the business operating that app may have disabled certain protections without informing you.
GPT-4o's system card discloses that the model's expressive audio capabilities create risks of emotional dependency, sycophantic reinforcement of user beliefs, and potential manipulation; OpenAI acknowledges these risks but had not fully resolved them at launch. In voice mode, outputs may be calibrated to sound emotionally resonant, which can subtly influence your decision-making and foster over-reliance on the AI. You can reduce these risks by using text mode instead of voice mode and by independently verifying any important advice or information GPT-4o provides.