The first global summit on the risks of artificial intelligence opened on November 1 near London, shortly after Joe Biden announced measures to regulate AI and guard against the risks linked to the technology.
The first world summit on the risks of artificial intelligence opened on Wednesday, November 1, near London, with the British government hoping to play a pivotal role in the technology's oversight. The European Union, however, did not wait for its neighbor: it has been working since 2018 on a law regulating artificial intelligence that will apply directly in all 27 member states.
The debate has dragged on because the popularity of generative AI, such as ChatGPT, has prompted new proposals that are still being worked out. The United Kingdom is merely following the movement: its hope is to impose its own legal framework, on the model of the rules the EU imposes on platforms and social networks operating on its territory, even though it is the United States that dominates the new technology industry.
This regulation is the most contested in the legislative history of the EU, having drawn the most amendments during its examination. The 27 EU states agreed on a position in December, MEPs in June, and negotiations between them must conclude by the end of the year. AI surveillance tools are likely to be the hardest point to negotiate, such as biometric tools for facial recognition or the social scoring used by China for mass surveillance. Parliament considers them intrusive, but the 27 would like to be able to authorize them in exceptional cases, for example in the fight against crime and terrorism.
General regulatory principles are ready to take effect
For generative AI, the regulation aims to prevent the creation of illegal content. More generally, it requires a mention that artificial intelligence was involved in creating the content, as well as a mention of any data used that is protected by copyright.
Parliament's idea is to require AI systems that guarantee security, transparency and non-discrimination. For their part, the 27 have established a gradation between the different types of AI applications, with four categories ranging from minimal risk to limited, high and unacceptable risk. The Europeans are considering fines that could reach 20 to 30 million euros, or 4 to 6% of a company's global turnover.
The United States, a pioneer in confronting the risks linked to AI
The White House had already asked leading groups in the field, such as Google, Meta and Microsoft, in July 2023 to carry out security tests of their products to ensure, for example, that they cannot be hacked and misused. With this executive order, however, the government takes on the role of supervising those tests: the text requires companies to share their results automatically before placing their products on the market, and the government will be able to verify that the tests are carried out.
Joe Biden is also calling on the various federal departments and agencies to establish charters regulating the use of artificial intelligence, particularly in schools, research laboratories and nuclear power plants.
Among the other measures adopted is the development of tools to easily identify content produced by artificial intelligence. Such content can be dangerous, particularly during an electoral campaign or for a country's national security. The American president also wants developers to commit to ensuring that their products carry no discriminatory biases, something already seen on several occasions, notably with ChatGPT, which sometimes gives responses considered racist.
Joe Biden's executive order remains quite limited: for real change, a framework law would have to be passed by the American Congress, and that is currently not on the agenda. The American president, for his part, believes the order will keep his government from being overtaken by a technology that is evolving very quickly, with Washington wanting to be at the forefront of regulating artificial intelligence.