Artificial intelligence giants challenged to oversee a technology in perpetual development

The second global “summit” on artificial intelligence (AI) concluded on Wednesday in Seoul with a collective commitment to managing the dangers of the technology, but the breakneck speed at which it is developing promises many difficulties.

During this event, organized jointly by South Korea and the United Kingdom, leaders in the sector – from South Korea’s Samsung to America’s Google, including OpenAI, the creator of ChatGPT – codified their commitments in a document titled the Seoul AI Business Pledge.

At the same time, more than a dozen countries, including the United States and France, agreed to work together against the threats posed by advanced AI, including “serious risks,” according to a joint statement from these countries.

These risks could include an AI system helping “non-state actors advance the development, production, acquisition or use of chemical or biological weapons,” or being able to “evade human oversight, including through circumvention of protective measures, manipulation and deception, or replication and autonomous adaptation,” according to the statement.

The day before, 16 of the biggest players in the sector had already signed an agreement to guarantee the safety of AI, building on the consensus reached during the first global “summit” on the subject, held in 2023 at Bletchley Park, UK. In particular, they promised to define the risks “deemed intolerable” and what companies will do to prevent them. The signatories also committed to “not developing or deploying a model or system” whose risks would prove too difficult to control.

Keeping pace

But experts say it is difficult for regulators to understand and manage AI given the lightning speed with which it is developing.

“I think it’s a very, very big problem,” warns Markus Anderljung of the Centre for the Governance of AI, a research organization based in Oxford, UK. “AI will be one of the biggest challenges that governments around the world will face over the next two decades,” he predicts. “The world will need to develop some kind of common understanding of the risks associated with the most advanced general models.”

For Michelle Donelan, the British Secretary of State for Science, Innovation and Technology, “as the pace of development of AI accelerates, we must keep pace […] if we want to control the risks.”

At the next AI “summit,” to be held February 10-11, 2025 in France, there will be more opportunities to “push the boundaries” in terms of testing and evaluating new technologies, predicts Ms. Donelan. “At the same time, we must focus our attention on mitigating risks outside of these models, ensuring that society as a whole becomes resilient to the dangers posed by AI,” adds the Secretary of State.

The runaway success of ChatGPT shortly after its 2022 release sparked a rush in the generative AI field, with tech companies around the world investing billions of dollars into developing their own models.

Between advances and dangers

Generative AI models can produce text, photos, audio, and even videos from simple prompts. Their supporters present them as a breakthrough that will improve the lives of citizens and businesses around the world. But human rights advocates and governments also fear their misuse in a wide range of situations, including to manipulate voters through fake news or deepfake images of political leaders.

Many are demanding that international standards be established to govern the development and use of AI.

“Increasingly, we understand that global cooperation is needed to really think through the problems and harmful effects of artificial intelligence. AI knows no boundaries,” said Rumman Chowdhury, an AI ethics expert who heads Humane Intelligence, an independent organization that evaluates AI models. According to her, the danger comes not only from the “runaway AI” of science fiction nightmares, but also from the inequality surrounding the technology, which is developed by “a very, very small number of people and organizations” who reap the benefits.

As for people in developing countries, like India, “they are often the ones who do the cleaning. They are the data annotators, the content moderators. They clear the ground so that everyone else can walk on untouched ground,” laments Ms. Chowdhury.
