Because GPT-4o can assist with code generation, vulnerability analysis, and technical problem-solving, it carries inherent dual-use cybersecurity potential. OpenAI acknowledges this risk but judges it adequately mitigated through refusal training and safety classifiers.
GPT-4o's system card discloses that the model's expressive audio capabilities create risks of emotional dependency, sycophantic reinforcement of user beliefs, and potential manipulation — risks OpenAI acknowledges remained not fully resolved at launch. Users interacting with voice mode may receive outputs calibrated to sound emotionally resonant, which can subtly influence decision-making and foster over-reliance on the AI. These risks can be reduced by using text mode instead of voice mode and by independently verifying any important advice or information GPT-4o provides.