The regulation of artificial intelligence is one of the major questions facing world leaders. On Wednesday, the European Union will vote on a draft regulation on the subject.
How can the excesses of artificial intelligence be regulated? Joe Biden, Rishi Sunak and other leaders are asking themselves this question, and London will soon even host the first world summit on the subject. For its part, the European Union has clearly positioned itself at the forefront, with a draft regulation on artificial intelligence. This text, presented as ambitious, will be put to a vote by MEPs in Strasbourg on Wednesday, June 14. It has been in preparation for several years: the European Commission's initial version dates from April 2021, before the emergence of ChatGPT or of the software now capable of creating images more real than life.
>> “One day, AI will look at us like fossils”: science fiction has long been concerned about the capabilities of artificial intelligence
These generative artificial intelligence systems forced their way into the debate, and MEPs had to take them into account, acknowledges Geoffroy Didier, vice-president of the Artificial Intelligence Committee at the European Parliament. "We therefore had to adapt to this new societal reality by including it in the parliamentary work that was already under way," he explains. "There will be reporting obligations for all designers of these technologies, so that everyone who has access to them knows that it is indeed an artificial intelligence, and so that these technologies are not misused to fake reality or to deceive those who observe them." One example: images generated by an artificial intelligence will in future have to carry a notice indicating that the picture is not real.
A pyramid of risk levels
Voices have been raised in recent months calling for vigilance. And not just any voices: they include the creators of these technologies, such as Sam Altman himself, the father of ChatGPT, who fears he has opened Pandora's box. "There are many risks," he explains.
“What particularly worries me is that these models could be used for large-scale disinformation. I am also worried because these systems could be used to carry out cyberattacks.”
Sam Altman, creator of ChatGPT
The European Union's draft regulation notably provides for a risk pyramid with four levels. The first level covers the least risky programs, which do not require any particular oversight, certain video games for example. Conversely, the fourth level concerns "unacceptable" risks, such as facial recognition databases, which would simply be banned.
>> TESTIMONIALS. Artificial intelligence: a "cheat" for some, a "resource" for others, the use of ChatGPT is debated among students and teachers at university
Between the first and last levels lie two intermediate categories: limited-risk and high-risk technologies. The whole difficulty is knowing in which of these two categories to place the software now emerging, such as ChatGPT, which can write an essay in a student's place or even take over certain secretarial jobs.
Distinguishing between creator and user
Other voices are pleading for regulation that does not hamper innovation too much. Foremost among them is Cédric O, France's former Secretary of State for Digital Affairs. According to him, Europe risks shooting itself in the foot. "The text as proposed by Parliament creates such an administrative burden and imposes so many technical constraints on those who develop or adopt artificial intelligence technologies," he says. "That is a huge problem, because this really is a revolution comparable to what we saw with factory automation or nuclear energy. So yes, regulate, but we must also make sure that we have companies that master these technologies."
>> "Artificial intelligence should not be seen as an enemy": the workers who use ChatGPT as a personal assistant without their employer's knowledge
In this area, Europe and France have the potential to compete with American or Chinese technologies. The Paris-based start-up LightOn, a French-style ChatGPT, offers companies software to automate certain tasks. Laurent Daudet, its co-founder, calls on regulators to strike the right balance and draw the right distinctions. "We must establish the responsibility of the model producer, who does not deliver a finished product; he only delivers the technological building blocks," he argues.
"It is the end user who will decide whether to use that technological building block to power an internal search engine or to help spread disinformation."
Laurent Daudet, co-founder of LightOn, on franceinfo
The debate is far from over. MEPs vote on the draft regulation on Wednesday, June 14, but a series of negotiations between the European Commission, the Council of the European Union and the European Parliament will follow. The text is therefore likely to evolve before its final adoption, scheduled for the end of the year.