When I was a computer science student, we defined artificial intelligence (AI) as the reproduction by a machine of a human act whose deductive process we do not understand. Thus, a computer could be programmed to play chess, but while a human quickly "sees" the move to play, the computer evaluates thousands of candidate moves, and the moves that follow them, to determine the most promising one, a procedure certainly different from the human one. With the development of machine learning, we moved towards pattern recognition tools that give probabilistic results. For example, after being shown a million photos of dogs, AI software will have built a visual model of what a dog looks like. Presented with a new image, it will respond with a score: the probability that the image shows a dog. And if presented with a photo of a blueberry muffin, the model may well see, with some probability, the large eyes of a Chihuahua. By comparison, a child who cannot yet speak will distinguish a dog from a muffin by the second encounter. The computer has no such intelligence. This is how driver-assistance software can trigger emergency braking after mistaking a plastic bag drifting across the road for a solid obstacle.
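To make that probability score concrete, here is a minimal sketch of how a classifier turns raw scores into probabilities. The class names and numbers are invented for illustration; a real model would derive its scores from millions of learned parameters, but the output has the same character: a probability, not an understanding.

```python
# Toy sketch: a classifier does not "recognize" a dog; it assigns a raw
# score to every class it knows and converts those scores into
# probabilities. All class names and numbers below are invented.
import math

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores produced by a trained model for one image.
classes = ["chihuahua", "blueberry muffin", "plastic bag"]
logits = [2.1, 1.8, 0.3]  # invented: muffin and chihuahua score close

for name, p in zip(classes, softmax(logits)):
    print(f"{name}: {p:.1%}")
# A muffin photo can score nearly as high as a chihuahua; the model
# reports likelihoods, it does not know what either thing is.
```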
But today, the deception is pushed further, into the field of written and visual creation. Software like ChatGPT, fed with millions of texts, gives us a kind of average of them, making us believe we are seeing a kind of reflection. And since some of the source texts contain errors or falsehoods, the final result will contain them too, the algorithm being endowed with no real discernment, only with whatever weighting may have been assigned to certain sources.

What is the interest for Microsoft, and now Google, in integrating these AIs into their search engines? Making more money. Indeed, if, instead of returning a list of websites dealing with the topic searched for, the engine can provide an answer that seems adequate, the user will not look elsewhere, at the risk of being fooled by bad answers. No matter: that user will not see others' advertisements, only those served up by the search engine. And if the source documents are, for example, newspaper articles, then, being neither cited nor visited, they will earn no royalties for their authors. Copying texts, or modifying them, without citing their sources, and passing them off as original creations, is the very definition of plagiarism. AI makes it possible to automate it, and even to anonymize it, because we can no longer determine which documents, among the myriad consulted, have been plagiarized. The only real intelligence here is human: that of the people whose work is plagiarized, and that of the people who profit financially from the plagiarism.
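To illustrate the earlier point about averaging and weighting, here is a deliberately simplified sketch: a toy "language model" that predicts the next word purely from weighted counts over its source texts. The corpus, the weights, and the bigram scheme are all invented for illustration; real systems are vastly larger and subtler, but they share the trait shown here, namely that a falsehood present in the sources survives in the output with a nonzero probability.

```python
# Toy sketch: next-word prediction as a weighted average of sources.
# Corpus, weights, and the bigram scheme are deliberately simplified.
from collections import Counter

corpus = [
    ("the earth is round", 3.0),  # invented weight: a favored source
    ("the earth is flat", 1.0),   # an erroneous source, still counted
]

counts = Counter()
for text, weight in corpus:
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[(a, b)] += weight

def next_word_distribution(word):
    """Probability of each continuation, averaged over weighted sources."""
    followers = {b: c for (a, b), c in counts.items() if a == word}
    total = sum(followers.values())
    return {b: c / total for b, c in followers.items()}

print(next_word_distribution("is"))
# -> {'round': 0.75, 'flat': 0.25}
# The falsehood persists with 25% probability: no discernment is
# involved, only the weighting assigned to each source.
```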