The European Parliament wants to better regulate ChatGPT

The European artificial intelligence regulation project cleared a crucial hurdle on Thursday, obtaining a first green light from MEPs, who called for new bans and for the ChatGPT phenomenon to be better taken into account.

The European Union wants to be the first in the world to adopt a comprehensive legal framework to curb the excesses of artificial intelligence (AI) while safeguarding innovation.

Brussels proposed an ambitious draft regulation two years ago, but its examination has dragged on, delayed in recent months by controversy over the dangers of generative AI capable of creating text or images.

EU member states only adopted their position at the end of 2022.

MEPs endorsed theirs in a vote in committee on Thursday morning in Strasbourg, which will have to be confirmed in plenary in June. A difficult negotiation will then begin between the different institutions.

“We received more than 3,000 amendments. You only have to turn on the TV: every day you can see how important this dossier is for citizens,” said Dragos Tudorache, co-rapporteur of the text.

“Europe wants an ethical, people-centred approach,” summed up Brando Benifei, the other co-rapporteur.

Artificial intelligence systems, for all their technical complexity, fascinate as much as they worry.

While they can save lives by enabling a quantum leap in medical diagnosis, they are also exploited by authoritarian regimes to conduct mass surveillance of citizens.

The general public discovered their immense potential late last year with the release of ChatGPT, California-based OpenAI’s content generator, which can produce original essays, poems or translations in seconds.

But the spread on social networks of strikingly realistic fake images created with applications such as Midjourney has raised alarm about the risk of opinion manipulation.

Prominent scientists have even called for a moratorium on the development of the most powerful systems until they are better regulated by law.

Parliament’s position broadly confirms the Commission’s approach. The text draws on existing product-safety regulations and will impose checks that rest primarily on the companies themselves.

Humans must stay in control

The heart of the project is a list of rules imposed only on applications that the companies themselves deem “high risk” according to criteria set by the legislator. For the European executive, this would cover all systems used in sensitive areas such as critical infrastructure, education, human resources, law enforcement or migration management, among others.

Among the obligations: ensuring human control over the machine, drawing up technical documentation, and putting in place a risk-management system.

Compliance will be monitored by supervisory authorities designated in each member country.

MEPs want to limit these obligations to products likely to threaten security, health or fundamental rights.

The European Parliament also intends to take better account of generative AI systems such as ChatGPT, calling for a specific regime of obligations that largely mirrors the one planned for high-risk systems.

MEPs also want to oblige providers to put safeguards in place against illegal content and to disclose the copyrighted material (scientific texts, music, photos, etc.) used to develop their algorithms.

The Commission’s proposal, unveiled in April 2021, already provides a framework for AI systems that interact with humans. It will oblige them to inform users that they are dealing with a machine and will require image-generating applications to state that their output was created artificially.

Bans will be rare. They will concern applications contrary to European values, such as the citizen-scoring systems or mass surveillance used in China.

MEPs want to add a ban on emotion-recognition systems and to remove the derogations that allow law enforcement to carry out remote biometric identification of people in public places.

They also intend to prohibit the mass harvesting of photos from the Internet to train algorithms without the consent of the people concerned.
