[Opinion] It’s time to revisit the evolution of AI and the design of technologies

In 1939, world-renowned physicist Albert Einstein and his colleague Leo Szilard wrote a letter to President Franklin Delano Roosevelt. This carefully typed letter warned Roosevelt of the dangers of nuclear fission research. Its contents, as well as the President’s response, now accessible on the Internet, remind us of the potential of scientific advances and the unintended consequences that flow from them.

It is generally accepted that Roosevelt and other heads of state ignored Einstein’s warnings; the bombings of Hiroshima and Nagasaki in 1945 were the direct consequence. Even today, nuclear weapons remain, for the “superpowers,” both the foremost threat of planetary destruction and the foremost means of deterrence. At the same time, foresight, protective measures and, above all, government regulation have made nuclear fission an important source of energy for several countries that do not have easy access to other forms of energy.

This shows that, in the hands of humanity, scientific and technological advances are not neutral. Although science answers important questions about the nature of inventions, it cannot predict all the possibilities associated with their technological applications, nor how societies will use and respond to them. Human beings drive and frame these new technologies, and they are the first to feel their impact.

More than 80 years later, we find ourselves in a similar and potentially just as destructive situation. With the rapid advance of its technologies, both the advantages and the disadvantages of artificial intelligence (AI) are becoming ever more obvious. Once again, society is overwhelmed by the breakneck pace of technological change. It cannot even protect itself against the obvious dangers of AI (job losses, manipulation of information, fraud) exploited for the benefit of national or international actors. Even some of the early architects and adopters of AI, in academia and in the general population alike, are sounding the alarm.

In Canada, Yoshua Bengio and Geoffrey Hinton, two eminent researchers and leaders in the field, have publicly denounced the breakneck development of AI, its ubiquitous use, its possible repercussions on society and the lack of regulation in this area. In an open letter published in March 2023, technology leaders stated that AI systems with human-competitive intelligence pose serious risks to society and humanity.

This letter was signed by 1,300 people, including Yoshua Bengio. In it, the signatories call on AI developers to work with policymakers to dramatically accelerate the development of robust AI governance systems, including public funding of AI safety research and well-resourced institutions able to cope with the dramatic economic and political disruption (particularly to democracy) that AI will cause.

As US President Joe Biden recently showed when he asked to meet with the CEOs of America’s largest AI technology firms, governments too are increasingly concerned.

But we are no longer in 1939 or 1945. Over the years, international research has expanded explosively across all disciplines. This growth has particularly benefited the humanities and social sciences, and in particular those disciplines directly concerned with the fundamental understanding of human behavior and thought.

In 2012, an Imagining Canada’s Future foresight exercise predicted that the humanities would play a central role in responding to the impacts of emerging technologies and in mitigating the global issues that would significantly affect Canada over the following decade. With the pace of technological advancement and, above all, the rapid development of AI and quantum technologies, we need the humanities more than ever. We also need to listen and to learn, in order to better educate society about the impact of technology and to better regulate its safe and responsible use. It is not too late.

Over the past few years, Canada’s main funding body for research in the social sciences and humanities has funded several projects that can inform us in this regard. Teresa Scassa (University of Ottawa) studies artificial intelligence, law, data governance, data privacy and the legal aspects of web harvesting; Eran Tal (McGill University) works on the ethical and social implications of big data and machine-learning algorithms; and Benoit Dupont (Université de Montréal) conducts research on resilience metrics to help companies and parapublic organizations better counter cyberattacks. These are just some of the projects that will make it possible to better regulate the use of AI.

Science, technology and their methods of application must be seen as complementary and integrated processes. This means working together from the moment a new technology emerges, to ensure the careful integration of scientific advances, the methods of their implementation, their probable consequences and the necessary legal and regulatory frameworks. We must channel all the energy and creativity of our research, from the natural sciences and engineering to the health sciences and the humanities. Our present and our future depend on it more than ever.
