Facial recognition, “deepfakes”, ChatGPT… How the European Union wants to regulate artificial intelligence

The European Parliament is due to vote on Wednesday on the “Artificial Intelligence Act” proposed by Brussels.

The European Union intends to establish the world’s first legal framework against the excesses of artificial intelligence (AI). Meeting in plenary session in Strasbourg, the European Parliament is due to adopt, on Wednesday June 14, its position on the “Artificial Intelligence Act”, a draft regulation tabled in April 2021 by the European Commission.

The text will then go through a series of negotiations between the different institutions with a view to reaching an agreement. If adopted before the end of the year, the regulation would enter into force “at the earliest at the end of 2025”, the European Commissioner for Digital, Thierry Breton, told AFP in early June. Franceinfo details the main pillars of this plan.

Strict limits on facial recognition

The European project is based on a classification of artificial intelligence systems according to the level of “risk” they present. The EU thus intends to prohibit systems that pose an “unacceptable risk” because they constitute a “threat to people”, according to the European Parliament’s website. This category notably includes “real-time and remote biometric identification systems”, such as facial recognition in public places.

The draft regulation presented by the European Commission nevertheless provides for exceptions so that the police can access these identification tools, subject to judicial or administrative authorization, in order to prevent a terrorist attack, search for missing children or find a person subject to a European arrest warrant following a “serious crime”. MEPs amended these exceptions to authorize the use of identification systems only “a posteriori”, and only “for the prosecution of serious offenses and only after judicial authorization”.

In addition, the European Union intends to prohibit social scoring systems, which classify people according to their behavior or socio-economic status. Such a system already exists in China, where it conditions access to certain services, such as transport.

Technologies designed to manipulate vulnerable people, such as children or people with disabilities, are also targeted by the regulation. These include, for example, toys equipped with voice assistants that encourage dangerous behavior, according to the Commission’s presentation of the draft regulation (PDF). Finally, this category also covers systems that use “subliminal manipulation” (via ultrasound, for example) that may cause physical or psychological harm to the users exposed to them.

Controls for “high risk” technologies

The European Union also wants to impose a series of rules on artificial intelligence considered “high risk”. This category includes “all the systems that will have to be assessed by the national agencies before their deployment”, explains Nathalie Devillier, doctor of law and expert on artificial intelligence issues for the European Commission, on France Inter.

AI used in sensitive areas, such as the management and operation of critical infrastructure, education, human resources, law enforcement or even migration management and border control, is particularly affected. MEPs also added to this list the systems used “to influence voters during political campaigns”, as well as the “recommendation systems used by social media platforms”, details a press release from Parliament.

These technologies will be subject to several obligations, including the need to provide for human oversight and to set up a risk management system. Compliance with the rules will be monitored by supervisory authorities designated in each member state. MEPs also want to enshrine citizens’ right to lodge complaints against AI systems and to receive explanations of decisions based on technologies classified as “high risk”.

Transparency rules for “limited risk” AIs

In sectors where risks are considered “limited”, the Commission and the European Parliament are calling for “minimal transparency” rules. Users will need to be informed that they are interacting with an AI. This notably covers applications that generate or manipulate audiovisual content, such as “deepfakes”, the Parliament illustrates.

However, these technologies will not be subject to prior assessment before being placed on the market. “The operator [will be able to] run his system right away, but he has an obligation of transparency: he must explain how the system works”, summarizes Nathalie Devillier.

Finally, in areas where the risks are considered “minimal”, the text provides “no specific framework”, reports the Vie publique website. Everyday connected objects in the home, such as washing machines, fridges and connected watches, would fall into this category, according to France Inter.

Specific requirements for generative AIs like ChatGPT

The draft regulation also intends to better account for generative AIs, such as ChatGPT or Midjourney, by imposing transparency rules on them. Besides the need to clearly inform users that they are interacting with a machine, applications will also have to specify that a piece of content (image, sound, text) has been created artificially. It should also be stipulated that these technologies cannot be “misused to fake reality or to deceive those who observe them”, underlines Geoffroy Didier, vice-president of the European Parliament’s committee on artificial intelligence, to franceinfo.

MEPs further want the companies behind these systems to be required to design their models so as to prevent them from generating illegal content. In addition, they will have to disclose which copyrighted data (scientific texts, music, photos, etc.) was used to train their algorithms.
