The writer and the conversational robot

Until very recently, I paid little attention to the emergence of conversational robots. From what I had gathered, they lacked precision, did not cite their sources, and invented things. Few of their abilities justified the widespread enthusiasm. During a conversation with a human, I learned that software exists that claims to tell whether a text was produced by a conversational agent. Websites offer to calculate the percentage of humanity in a text; a simulacrum of human writing can therefore be detected. The problem for us humans is that we now depend on a machine to discern the origin of a piece of writing. And the problem does not stop there. It becomes almost insoluble, since other platforms offer to humanize (remember the term) a text so that it becomes undetectable by the very software meant to measure its humanity!

From my point of view as a writer, the conversational agent looks like a prime suspect: a perhaps likeable impostor, but one already stained with ink that is difficult to erase. Delving into the subject, I came to understand that our conversational agents are the third incarnation of self-proclaimed artificial intelligence, a field that has already gone through two booms and two busts. Built on deep learning, the marvel of the moment was made possible by the massive use of textual and pictorial data harvested from the Web.

For computer scientists and their deep learning systems, human texts are a body of data from which correlations are extracted. The text is split into tokens, to which the software assigns relative weights. To write a text, the conversational agent then generates a new sequence of tokens, each chosen according to the probability of one token following another. Such an agent's conception of language differs radically from a human's.
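This probabilistic view of text can be illustrated with a minimal sketch. The toy bigram model below is only an analogy (real conversational agents use deep neural networks trained on billions of tokens, and the corpus here is invented for the example), but the principle it shows is the one described above: count which token follows which, then generate a "text" by sampling each next token according to those weights.

```python
import random
from collections import defaultdict

# Tiny illustrative corpus; a real agent trains on massive Web data.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each token follows another: the "relative weight".
follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev):
    """Pick a continuation with probability proportional to its count."""
    options = follows[prev]
    candidates = list(options)
    weights = [options[t] for t in candidates]
    return random.choices(candidates, weights=weights)[0]

# Generate a short sequence token by token, starting from "the".
random.seed(0)
tokens = ["the"]
for _ in range(5):
    if not follows[tokens[-1]]:  # no known continuation: stop
        break
    tokens.append(next_token(tokens[-1]))

print(" ".join(tokens))
```

The generator has no notion of meaning: every step is a weighted draw from observed co-occurrences, which is precisely why such output can be statistically plausible without being understanding.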

This is what Noam Chomsky has noted. For him, conversational agents mine enormous amounts of data, look for patterns in them, and grow increasingly proficient at generating statistically probable outputs, such as seemingly human-like language and thought. For Chomsky, these robots teach us nothing about how language and human intelligence work. Being human, with imperfect judgment (my apologies to readers), I can say that this explanation fully satisfies me. Chomsky wonders whether to laugh or cry about it. For now, I think we should laugh.

I continued my investigation by testing ChatGPT's value as a search engine and writing tool. The free version adds no value to research: it garbles biographies, confuses authors, invents works, and indicates no sources. The paid version yields a more precise text, still with no sources mentioned, and a result often far poorer than a Wikipedia article: no added value, then, except a little fabrication.

I asked ChatGPT 4.0 for some author biographies. After querying Bing, the robot writes a few paragraphs, acceptable most of the time. For my own biography, ChatGPT listed the correct information but added that I am an important poet. This is obviously an invention. Not wanting to contradict such a brave tool, I asked it whether Claude Paré was an important poet. It replied, "Yes." I salute its insight, and I can now proclaim that artificial intelligence has affirmed that I am an important poet!

But enough jokes. As we can see, these conversational agents are carnival barkers peddling vague patter. We are entering an era of falsehood and a spiral of simulacra, from which we will emerge only with difficulty. The first falsehood is that an intelligence writes, in our place, texts that will seem human but never will be!

A literary work is necessarily written by a human with body and mind, the result never revealed instantly, following the meanders of creation. A robot produces texts and images; it does not write or draw. Writing and drawing are human gestures, and literature is the work of living people. Literary text and language cannot be reduced to a set of data.

It is already impossible to verify whether a literary creation is human or not. This is a first consequence of the introduction of these agents into the public space. Moreover, the notions of writing and reading are likely to change considerably, because the machine can read and write instantly, based on a probabilistic conception of text and writing.

The expression "artificial intelligence" is false from the outset, because it implies the existence of a diffuse and immaterial artificial intelligence. Rather, we must affirm that there is a range of software implementations and applications with differing modalities and functions.

Few people contradict the discourse about the intelligence of these computer products. Several, including Luc Julia, affirm that intelligence is not what is at issue. According to them, connectionism and deep learning cannot even remotely approximate human intelligence, consciousness, and the language ability acquired from childhood. The meaning of the word intelligence may well drift, as happened when Facebook became "social networks." As proof, the words of this French intellectual: since when have social networks cut us off from sociability? Is this asociability not the result of so-called intelligent algorithms?

Human intelligence risks being devalued if statistical correlation operations are defined as intelligence, even when they can produce valid and usable results. For my part, I reject the word intelligence in the expression artificial intelligence. In my view, intelligence is a product of the evolution of living things. And, for me, this is written "without AI." Using this label is a way of affirming that a text, that a work, is produced by a human. I will henceforth put this label on my writings and publications, and I invite other writers to do the same.
