(San Francisco) OpenAI, the company behind the generative artificial intelligence (AI) chatbot ChatGPT, is concerned that its software’s lifelike voice could push users to bond with it, at the expense of human interaction.
“Anthropomorphism is the act of attributing human attitudes or characteristics to something that is not human, such as an AI model,” the company said in a report released Thursday.
“The risk may be enhanced by GPT-4o’s audio features, which facilitate human-like interactions,” the report notes.
The document was published on the eve of the launch of the new version of ChatGPT, GPT-4o, which integrates the ability for the software to respond vocally, allowing users to have the equivalent of a conversation with it.
But having the same type of conversation with an AI that you can have with a human could create “misplaced trust” in the software, which the added voice could reinforce.
OpenAI notes in particular that it observed exchanges among testers of the new version that seemed to show the creation of an emotional bond with the AI, such as expressions of regret that it was their last day together.
“Although these cases appear to be inconsequential, they highlight the need for further research into how these effects might manifest themselves in the longer term,” the report concludes.
Entering into a form of socialization relationship with AI could also encourage users to reduce their desire to have relationships with humans, OpenAI anticipates.
“Prolonged interactions with the model could have an effect on social norms. For example, our models are always respectful, allowing users to interrupt them at any time, a behavior that, while normal for an AI, could be outside the norms of social interactions,” the report details.
Substitute
AI’s ability to remember details of conversations and carry out tasks assigned to it could also lead users to rely too much on the technology.
“These new concerns shared by OpenAI about ChatGPT’s potential reliance on voice underscore an emerging question: should we take the time to understand how technology is affecting human interactions and relationships?” said Alon Yamin, co-founder and CEO of Copyleaks, an AI plagiarism detection platform.
AI is a complementary technology, intended to help us streamline our work and daily lives; it should not become a substitute for real human relationships.
Alon Yamin, Co-Founder and CEO of Copyleaks
OpenAI said it is continuing to study how its AI’s voice function could lead users to become emotionally attached to it.
Testers also managed to get it to repeat false information or create conspiracy theories, adding to concerns about the potential risks of the AI model.
ChatGPT’s voice feature has already sparked widespread backlash, forcing OpenAI to apologize to actress Scarlett Johansson last June for using a voice that sounded very similar to hers, fueling controversy over the risk of voice cloning enabled by the technology.
While the company denied using Ms. Johansson’s voice, the fact that its CEO, Sam Altman, promoted the voice function on social networks with a single word, “Her,” in reference to the film in which the actress plays an AI, did not help win over observers.
The film, released in 2013, tells the story of a man, played by actor Joaquin Phoenix, who falls in love with his personal AI, “Samantha”, voiced by Scarlett Johansson.