Much has already been written about the epistemic limitations of ChatGPT. As Jocelyn Maclure pointed out in these columns, the chatbot launched in November 2022 is a smooth talker. Not only does ChatGPT not understand what it is saying, it is also completely indifferent to the truth.
It must be said that it was not programmed for that. ChatGPT learned to predict text from a preceding text, the prompt. Yet this prediction, impressive as it is, is far from always reliable. Logic questions, in particular, quickly reveal the limits of the artificial intelligence (AI) system.
An artificial emotional intelligence
What about its emotional intelligence? Rather than a handicap, couldn’t ChatGPT’s knack for conversation turn out to be a serious asset?
Machines are easily associated with logico-deductive intelligence. But we have known at least since the work of Howard Gardner (1983) that intelligence is multiple: it is also musical, spatial, or interpersonal. It is this last dimension that is now called emotional intelligence, which can be briefly defined as the ability to manage one’s own emotions and those of others.
How can one say that ChatGPT, which has no emotions and feels nothing, has emotional intelligence? Just look at things “from the outside” and adopt a functionalist perspective: it then appears that OpenAI’s conversational agent is quite adept at acting as if it perceives, understands, and manages the emotions of others.
In other words, ChatGPT does not need to experience emotions to pass a sort of Turing test and behave (in what it writes) as if it did.
From ELIZA to ChatGPT
In a book published in 2020, The Insidious Grip of Talking Machines (L’emprise insidieuse des machines parlantes), the French psychiatrist Serge Tisseron does not hesitate to speak of a “certain degree of social or emotional intelligence” in connection with conversational agents. He also reminds us that artificial emotional intelligences are nothing new. As early as 1964, a program called ELIZA rephrased its user’s words as a question and simply answered “I understand you” when it reached an impasse.
It was an opportunity to discover the extent to which people tend to project meaning onto what a computer says. Despite its rudimentary programming, ELIZA elicited attachment and emotionally charged dialogue.
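To give a sense of how little machinery is needed to produce this effect, here is a minimal ELIZA-style sketch in Python. It is an illustration only: the pattern rules and replies below are invented for this example, and Weizenbaum’s actual program relied on much richer pattern-matching scripts, including pronoun swapping.

```python
import re

# A toy, ELIZA-style responder: a few invented rules that turn the user's
# own words back into a question, plus a stock fallback reply.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "How long have you felt {}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {}."),
]
FALLBACK = "I understand you."  # the stock answer when no rule applies

def reply(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(reply("I am worried about the future"))  # Why do you say you are worried about the future?
print(reply("It rained all day"))              # I understand you.
```

Nothing in this loop represents what the user means; it only reflects surface patterns back, which is precisely why the attachment it provoked is so striking.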
Needless to say, ChatGPT can simulate a dialogue with a therapist far more convincingly than ELIZA: all it takes is to prime it properly by writing an appropriate guiding message, a prompt.
It should also be noted that it has been trained – through reinforcement learning with human annotators – not to give answers deemed toxic.
The Calinator Threat
If Tisseron speaks of an “insidious grip” of conversational agents, it is because he fears that the emotional intelligence of AI systems will transform them into Calinators. This aptly coined name – a blend of câlin (French for “cuddle”) and Terminator – designates AIs that have become dangerous by dint of being endearing.
Admittedly, ChatGPT is not a true Calinator. But it shares several of its traits: it always keeps calm, obeys eagerly, and apologizes humbly.
In general, Tisseron notes, these AI systems have several assets for winning humans over: infinitely patient, they do not judge you, and they can be programmed to arouse empathy, attachment, or amusement.
What concerns Tisseron is the effect of chatbots on interpersonal relationships. “Let us refuse,” he writes, “Nutella robots designed to flatter their users.” By interacting with AI systems that are attentive and available at all hours of the day and night, our standards and expectations of fellow humans could change.
Talking machines could then “quickly constitute spaces for emotional outpouring at the expense of encounters with our fellow human beings”, particularly among teenagers. The machines would make us shun our own kind. We can only agree, if we consider the time and attention we devote to one another: conversing with ChatGPT is not conversing with your brother or your neighbor. But the argument lacks bite, since it applies just as well to screen time in general.
Not even afraid
If, for my part, I find it hard to share this fear of replacement, it is because I do not believe that conversational agents will “qualitatively” replace human relationships. The differences between a human and a chatbot – starting with the absence of a body – are so obvious that even shallow human relationships are unlikely to suffer from the development of chatbots.
I would not be surprised if the growing presence of these new entities in our cognitive ecosystem led us, by contrast, to place greater value on human relationships.
Unlike ChatGPT, when your neighbor says hello to you, he means it: he understands what he is saying, and that is really his intention.
Not only am I unafraid of the emotional intelligence of ChatGPT and its kind, I even hope that the proliferation of these entities will, on the whole, be beneficial to mental health. Studies suggest that chatbots improve the situation of people suffering from depression, distress, or stress. And how can we not think of the major problem of loneliness? In short, let us not forget that it is possible to program the Calinator to be genuinely kind.