What the “historic” compromise agreed upon by the European Union contains

EU member states and the European Parliament have agreed on a text to regulate the use and development of artificial intelligence. Unprecedented legislation on a global scale.


The European Commissioner for the Internal Market, Thierry Breton, at the European Parliament in Strasbourg, October 18, 2023. (EUROPEAN UNION / HANS LUCAS / AFP)

“Historic!” hailed Thierry Breton, the European Commissioner for the Internal Market, on Friday, December 8. “The EU becomes the first continent to set clear rules for the use of AI,” declared the former French Minister of the Economy on X (formerly Twitter), after three days of intense discussions. That evening, the member states of the European Union and the European Parliament reached an agreement on the Artificial Intelligence Act, unprecedented legislation aimed at regulating the use and development of artificial intelligence.

The compromise, which has yet to be finalized and adopted by the European Parliament and the Council of the European Union, aims to establish “obligations for AI based on its potential risks and its level of impact,” a press release summarizes. The text “aims to ensure that fundamental rights, democracy, the rule of law and ecological sustainability are protected against high-risk AI, while boosting innovation and making Europe a leader in the field”.

Launched in 2021, this draft legislation was shaken up by the arrival of ChatGPT, an illustration of the vast potential – and possible threats – of generative artificial intelligence. Franceinfo reviews the main points of this unprecedented agreement.

General rules for all AI

The agreement includes measures for general purpose artificial intelligence (GPAI) systems. As France’s National Commission for Information Technology and Liberties (CNIL) notes, such an AI system “can be used and adapted to a wide range of applications for which it was not intentionally and specifically designed.”

These systems, as well as the models from which they were created, will be subject to transparency rules, as the European Parliament wanted. This includes drawing up precise technical documentation, complying with European copyright rules and providing information on the content used to train AI models. Developers must ensure that the sounds, images and texts produced are clearly identified as artificial.

Generally, “AI systems presenting only limited risk would be subject to very light transparency obligations, for example by disclosing that the content was generated by AI so that users can make informed decisions about its further use,” summarizes the Council of the European Union. On the other hand, “the greater the risk, the stricter the rules will be.”

Specific measures depending on the risk

For general-purpose artificial intelligence models presenting a “systemic risk”, “Parliament negotiators succeeded in obtaining stricter obligations,” the European Parliament’s press release continues. Model evaluations may be carried out, along with assessments of and measures to limit these “systemic risks”. In the event of serious incidents, reports will have to be made to the Commission, and the compromise also calls for cybersecurity guarantees.

The text also establishes reinforced constraints for artificial intelligence systems defined as “high risk”. According to the European Parliament, these are technologies presenting a “significant potential harm to health, safety, fundamental rights, the environment, democracy and the rule of law.” Examples include technologies aimed at altering election results or influencing voters’ decisions. A fundamental rights impact assessment will be mandatory, as will greater transparency.

European citizens will also be able to file complaints against artificial intelligence systems. Finally, the text provides for the obligation to inform people “when they are exposed” to emotion recognition systems. Failure to comply with these various rules exposes AI companies to fines of up to 7% of annual turnover, “depending on the offense and the size of the company”. A service dedicated to artificial intelligence will also be created within the European Commission.

Prohibitions and exceptions

Faced with the risks posed by certain artificial intelligence systems, particularly for “citizens’ rights and democracy”, EU member states and MEPs have agreed to ban six types of artificial intelligence systems. These include biometric categorization systems that use information such as political and religious beliefs or sexual orientation, and technologies that scrape facial images from the internet or from video surveillance footage “to create facial recognition databases”. Other artificial intelligence systems, “used to exploit the vulnerabilities of people due to their age, disability or economic situation” or aimed at “manipulating behavior”, will also be prohibited.

For biometric identification systems, however, exceptions are provided in the context of law enforcement work. Police may use some of these technologies subject to “an authorization from the courts” and for “a list of strictly defined crimes”. Biometric technologies could be used to search for victims of kidnapping, sexual exploitation or human trafficking, or in response to a terrorist threat. Law enforcement will also be able to use them to locate people suspected of serious crimes, such as murder, rape, environmental crimes or acts of terrorism.

Efforts to support innovation

The agreement also aims to allow European companies, “especially SMEs”, to develop AI technologies “without pressure from industrial giants who control the value chain”. As Libération recalled, European business leaders had warned, in a letter in June, of a “draft legislation [putting] Europe’s competitiveness and technological sovereignty at risk”.

The rules on the development of new AI systems must allow testing in real-world conditions, with “specific conditions and guarantees”, the Council of the EU specifies. The compromise also provides for measures to support smaller companies in the field.
