The manifesto signed by leaders in the development of artificial intelligence (AI) marks the end of the romantic visions that long inspired states’ digital policies. Echoing the invitation of researchers convened by UNESCO to examine the “blind spots of AI,” this call warns of the risks of an AI that would behave like a headless chicken, running in all directions regardless of the harm it may cause.
The signatories of the manifesto ask whether we should develop non-human minds and risk losing control of our civilization. They call for a public and verifiable pause in the development of AI systems and, failing that, for governments to institute such a moratorium. For these researchers and business leaders, decisions to deploy AI systems should not be left solely to unelected entrepreneurs. Powerful AI systems should be developed only if their effects are positive and their risks manageable.
We are light years away from the mythical “Declaration of the Independence of Cyberspace,” the manifesto presented in Davos in 1996 by John Perry Barlow. It enjoined the states of the entire world to stay away from a cyberspace constituted by exchanges, relationships, and thought itself, deployed like a wave rising through the network of our communications. A world that is both everywhere and nowhere, but which is “not where bodies live,” and in which excesses would correct themselves spontaneously.
We now see that our bodies and our minds fully inhabit this virtual world, fueled more and more by AI and data. In our daily lives, the excesses we suffer there do not correct themselves. Digital technologies, amplified tenfold by the extraordinary possibilities of AI, can serve the best as well as the worst. To let them spread without imposing obligations on those who develop them is to put the security of our democracies at risk.
The appeal by these scientific and business luminaries has been criticized. Many fault it for a lack of concrete proposals to meet the challenges posed by AI technologies. It has also been observed that companies like Microsoft have recently disbanded their teams of researchers working on the ethical issues of AI technologies. But beyond these contradictions, the initiative of these leaders of the AI world reminds us that constantly postponing measures to effectively delimit what can be done with these technologies ends up threatening the integrity of democratic processes.
Real Wealth Creation
AI and other digital technologies are changing how we work and live. In every sphere of activity, there is an urgent need to better understand the uses, and especially the combined effects, of AI and other technologies. These powerful tools promise real progress. But they can also exclude whole segments of our fellow citizens and amplify the capacities of those with malicious intentions.
Contrary to what the signatories of the call for a moratorium assert, it is not up to society to adapt to technologies, but rather up to technologies to operate in compliance with the requirements of democratic society. This requires laws that take the characteristics of these technologies into account. For example, AI systems can modify their operation in light of the situations they encounter. It would be futile to attempt to regulate their design and uses by simply recycling the rules applied to infinitely less complex objects.
True progress from technologies like AI is possible only if they are put to use with respect for people’s rights. Effective laws are needed to ensure that malicious uses of these tools are, as far as possible, detected and eliminated before the damage occurs.
The considerable capacity of AI technologies to generate both wonders and sinister frauds calls for regulations that require risks to be identified before deployment. Similarly, we must strengthen and accelerate our ability to identify and neutralize those who use these tools to defraud.
In Canada, Bill C-27, enacting the Artificial Intelligence and Data Act, like the measures put forward by the European Union, would oblige those who design and deploy AI systems to assess the risks in advance and report transparently on the precautions put in place to prevent harmful uses. This is an example of a legal framework that should prevent the most vulnerable from subsidizing the profits of those who deploy tools without concern for their harmful effects.
The chimeras that marked the first decades of the cyberspace era cannot serve as a roadmap for dealing with the risks of connected technologies. The call from AI experts must spur the accelerated adoption of robust laws obliging designers and users to assess and manage the risks of these powerful tools… unless we resign ourselves to enduring an AI that behaves like a headless chicken.