Elon Musk and a thousand experts warn of the major risks to humanity posed by the development of new artificial intelligence systems.
For the past three or four years, we have moved into a new generation of artificial intelligence: it is generative, and it creates content. Previously, research had mainly progressed in the field of machine learning (notably "deep learning"): by feeding computers large databases, we could teach them specific tasks, such as playing the best moves in chess, recognizing a tumor in a medical image, or spotting a forgery among art paintings.
These systems respond to a specific request. But now, "generative" intelligences are able to create something new: write an article or a thesis, draw, compose music, and so on. The chatbot and writing tool ChatGPT is one of the latest innovations, and its latest version, GPT-4 (not yet public, but which experts have been able to test), seems particularly impressive.
What is so impressive about GPT-4 is its capacity for reasoning that is both common-sense and highly elaborate, and its ability to reason from images or diagrams. For example, the artificial intelligence was asked to solve a physics problem from a polytechnic entrance exam; to do so, it had to find the right instructions in the right place in a PDF document and then reason from a diagram. The software succeeded. The fear is that this algorithmic power, which impresses even the experts, could be mishandled or misused, opening the way to more disinformation and hacks of all kinds. Hence the need for security protocols, according to the signatories of the petition.
A futile gesture?
Although big names in tech and research have signed the petition, not all experts agree. For Jamal Atif, professor at Paris Dauphine University and scientific delegate for artificial intelligence at the French National Center for Scientific Research (CNRS): "Of course, you need ethics and a regulatory framework, but it is illusory to want to stop research. On the contrary, it is the responsibility of States to develop public research on artificial intelligence, in order to better understand the issues and to legislate better."
Another pressing task is to get the general public and elected officials interested in these questions of artificial intelligence, as we do for climate or genetic research. The current pace of innovation makes it possible to envisage several possible futures, depending on how great an influence artificial intelligence comes to have on our society. The stakes are high enough that we must take an interest now.