(Seoul) The second global “summit” on artificial intelligence (AI) concluded Wednesday in Seoul with a collective commitment to managing the dangers of the technology, but the breakneck pace at which it is developing promises to make that task difficult.
During the event, organized jointly by South Korea and the United Kingdom, leaders of the sector – from South Korea’s Samsung Electronics to America’s Google and OpenAI, the creator of ChatGPT – codified their commitments in a document titled the “Seoul AI Business Pledge”.
The day before, sixteen of the sector’s biggest players had already signed an agreement to guarantee the safety of AI, building on the consensus reached at the first global “summit” on the subject, held in 2023 at Bletchley Park (United Kingdom).
In particular, they promised to define which risks are “deemed intolerable” and what companies will do to prevent them. The signatories also committed to “not developing or deploying a model or system” whose risks prove too difficult to control.
But experts say it is difficult for regulators to understand and manage AI, given the lightning speed with which it is developing.
“I think it’s a very, very big problem,” warns Markus Anderljung of the Centre for the Governance of AI, a research organization based in Oxford, UK.
Keeping pace
“AI will be one of the biggest challenges that governments around the world will face over the next two decades,” predicts this expert. “The world will need to develop some kind of common understanding of the risks associated with the most advanced general models.”
For Michelle Donelan, the British Secretary of State for Science, Innovation and Technology, “as the pace of development of AI accelerates, we must keep pace […] if we want to control the risks.”
At the next AI “summit”, on February 10-11, 2025, in France, there will be more opportunities to “push the boundaries” of testing and evaluating new technologies, predicts Ms. Donelan.
“At the same time, we must focus our attention on mitigating risks outside of these models, ensuring that society as a whole becomes resilient to the dangers posed by AI,” adds the Secretary of State.
The runaway success of ChatGPT shortly after its 2022 release sparked a rush in the generative AI field, with tech companies around the world investing billions of dollars into developing their own models.
Generative AI models can produce text, photos, audio and even video from simple text prompts. Their supporters present them as a breakthrough that will improve the lives of citizens and businesses around the world.
But human rights defenders and governments also fear their misuse in a wide range of situations, including the manipulation of voters through fake news or “deepfake” photos and videos of political leaders.
Many are demanding that international standards be established to govern the development and use of AI.
Sci-fi nightmares
“More and more, we realize that global cooperation is needed to really think through the problems and harmful effects of artificial intelligence. AI knows no boundaries,” said Rumman Chowdhury, an AI ethics expert who heads Humane Intelligence, an independent organization that evaluates AI models.
According to her, the danger comes not only from the “runaway AI” of science-fiction nightmares, but also from the inequities surrounding the technology, which is developed by “a very, very small number of people and organizations” who reap the benefits.
As for people in developing countries, like India, “they are often the ones who do the cleaning. They are the data annotators, the content moderators. They clear the ground so that everyone else can walk on pristine terrain,” laments Ms. Chowdhury.