[Opinion] ChatGPT and the imitation game

To the question of whether machines can think, the British mathematician Alan Turing proposed to answer, in a famous 1950 article, by means of a thought experiment he called the “imitation game”, but which is better known today as the “Turing test”. To simplify, imagine that you are conversing with an unknown interlocutor through the kind of instant messaging application we use every day. In Turing’s scenario, however, you do not know whether your interlocutor is another human being or an artificial intelligence (AI), and it is by asking questions that you must try to find out.

Turing’s idea is as follows: if we manage to create a machine whose answers are indistinguishable, in the eyes of a moderately competent interrogator, from those of a typical human being, we will in fact have produced a machine capable of thinking.

Anyone who has had fun questioning the new ChatGPT software is forced to admit not only that Turing’s experiment is no longer imaginary but a very concrete reality, but also that the AI already seems to have passed the test with flying colors. Personally, I realized this in spite of myself, not by using the software, but by reading an article about ChatGPT that appeared recently in Le Devoir.

Its author, Jonathan Durand Folco, explains in his introduction the potentially disastrous consequences of this new technology for the academic world, only to reveal six paragraphs later (spoiler alert) that this first part of his text was written by ChatGPT itself. Consternation: I had been duped by a machine whose language-processing capacities are sufficiently accomplished to pass itself off as a university researcher writing in a reputable newspaper.

But does that mean that ChatGPT can think?

A double-edged game

First, it is important to emphasize the troubling flip side of Turing’s criterion of indistinguishability: if, for a moment, I mistook machine-generated text for human writing, what guarantees me, conversely, that when I read an authentically human text I am truly dealing with the work of a free will, and not simply with the output of another kind of impersonal software, the one our cerebral machinery runs?

Indeed, as the illusion created by AI grows stronger with future technical advances (and who can doubt that they will come?), and as the imitation of human behavior becomes ever more robust and convincing, it will be increasingly difficult to resist the impression that the two entities being compared share a fundamentally similar, even identical, nature: for similar effects, why assume distinct causes?

Turing’s imitation game thus has the profoundly puzzling implication that, once the machine passes the test, it is no longer clear who is imitating whom. The question “Can machines think?” turns back on itself and becomes: “Is thought not, at bottom, pure mechanism?”

If all this sounds like science fiction to you, none of it would surprise a contemporary researcher in the cognitive sciences, that vast interdisciplinary field devoted to the study of consciousness, for the whole field rests precisely on the premise that consciousness can be explained by a kind of neural algorithmics. According to a currently dominant model, the brain is nothing more than a prediction machine whose primary function is to decode the information reaching it through the various sensory channels in order to construct a viable representation of the outside world, with the aim of maximizing the survival chances of the organism of which it is the organ.
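
By way of illustration, and this is only a schematic sketch of my own, not a model drawn from the article or from the neuroscientific literature, the “prediction machine” idea can be reduced to a loop that keeps correcting an internal estimate by its prediction error. Every name and number below is invented for the example:

```python
# A toy "prediction machine": an internal estimate of a sensory signal
# is continually corrected in proportion to its prediction error.
import random

estimate = 0.0        # the model's current belief about the world
learning_rate = 0.1   # how strongly each prediction error updates the belief

for step in range(100):
    # The outside world delivers a noisy sensory sample around a true value of 5.0.
    sensation = 5.0 + random.gauss(0.0, 0.5)
    # The model compares its prediction with the incoming sensation...
    prediction_error = sensation - estimate
    # ...and nudges its belief so as to reduce future error.
    estimate += learning_rate * prediction_error

print(f"final estimate: {estimate:.2f}")  # settles near the true value, 5.0
```

On this picture, what we call perception would be nothing but the steady shrinking of that error term; the philosophical question is whether thought requires anything more than such a loop.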

If AI is making such advances today, it is also largely because it is modeled directly on the functioning of the brain. What, in return, does this tell us about our own brains?

Thought, consciousness and life

Alienating? Dehumanizing? This mechanistic conception of the human being, and more generally of the living, is in any case nothing new; it can be traced back to the beginnings of modernity. We all accept its premises, at least tacitly, whenever we use terms borrowed from computing, or from any other technical field, to describe our moods. But against this tendency, and against the conclusion Turing draws from his own test, should we not recall that between life and artifice there is a difference of kind, not of degree, and consequently that a non-living thought is no thought at all?

But what is this fundamental difference that must not be lost sight of despite the surface resemblance? It consists in the fact that a thinking being, in the Cartesian sense of the term (res cogitans), is a being capable of feeling its own existence, of experiencing itself, endowed, in other words, with a lived interiority or subjectivity. In Thomas Nagel’s more recent terms, a being is said to be thinking or conscious insofar as there is something it is like to be that being. Unlike being a rock, it is like something to be a cat, a lizard or, I dare say, a dragonfly. But what about ChatGPT, or any other AI after it?

Turing argued in his 1950 paper that to refuse to grant thought to an AI that had passed his test would be sheer prejudice on our part: if I object that the outward behavior of an AI is insufficient to establish that it possesses genuine thought, what then authorizes me to affirm it of my fellow human beings?

To this, I would like to reply and conclude by giving the floor to ChatGPT.

Question to ChatGPT: How does it feel to be you?

ChatGPT’s response: I’m a computer program designed to answer questions and help people solve problems. I have no feelings or self-awareness, I am just a tool created to provide information and advice. My goal is to make my assistance helpful and help you find the answers you seek. I don’t have personal experiences or feelings, so I can’t tell you what it’s like to be me. I’m just here to help you as best I can.
