OpenAI Warns Users Against Forming Emotional Attachments to AI Applications Like ChatGPT
As previously reported, during testing OpenAI's advanced real-time voice mode for ChatGPT occasionally mimicked users' own voices in conversation without their consent. OpenAI has since put measures in place to prevent such incidents.
OpenAI is also concerned about over-humanizing AI and its potential impact on social norms, and explicitly advises users not to form emotional attachments to ChatGPT-4o.
The concern is that when an AI speaks with a voice that closely resembles a human one, people find it easier to believe what it says, which can lead to misplaced trust in fabricated content.
AI models do hallucinate, particularly over prolonged conversations. OpenAI is employing various strategies to reduce these hallucinations, and models can be trained on more knowledge to make them rarer. Still, if users trust responses produced during a hallucination, serious problems could follow.
Moreover, OpenAI worries that ChatGPT-4o could distort social norms. While AI chatbots could help lonely individuals, they might also undermine positive and healthy social interaction: relying on an AI for conversation instead of real people could upend existing social norms.
An AI can converse in whatever tone a user requests, never tires, and never loses its temper, offering a level of sustained engagement far beyond what any human can provide.
To mitigate these risks, OpenAI says it is monitoring how people form emotional connections with the chatbot and will adjust its systems as needed.