[Column] Artificial intelligence and us

Last week, thousands of people signed an open letter demanding a six-month pause in the development of artificial intelligence (AI). Among the signatories was Yoshua Bengio, founder and scientific director of Mila, the Quebec artificial intelligence institute, and one of the pioneers of deep learning. ChatGPT, for example, is an AI tool built on this technique.

Since then, voices have been raised everywhere asking for the same thing.

Artificial intelligence learns from the data it is fed and from the way its algorithm has been built. An AI tool can therefore become very useful and benevolent, or turn into a monster that learns from lies, negative information or violence. This is where development that is too rapid and unsupervised becomes a danger to our societies, to humanity and to all living beings.

Last week, I told you about a survey on how citizens perceive the use of data by cities and states. The same study found that only 56% of Quebecers have a positive feeling toward AI. What’s more, 78% of them think that humans will become dependent on artificial intelligence, and 71% that it will escape their control.

Although this digital tool is taking up more and more space and can be very useful, it arouses mistrust everywhere. Hence this call for a pause. When even the very “friendly” Elon Musk joins the push to put AI on hold, it means there is a real problem. I would even dare to say that it may already be too late.

Concretely, what can we do in the face of the uncontrolled acceleration of a tool that could, in time, take over several aspects of our lives? We can demand that governments and the competent authorities intervene as soon as possible. Fortunately, many of them are also alarmed by this rapid development and seem to want to act. It remains to be seen what will come of it. The emerging problem is that all these reactions are completely disorganized, with no real coordination.

To have some control over the development of artificial intelligence tools, we must act primarily on two fundamental elements: data and algorithms.

To be effective, a tool that uses deep learning needs a great deal of data, which brings us back to the quality of that data, since it directly influences the results the AI produces. When we talk about quality, this includes criteria such as ethics and social and environmental responsibility.

We must also examine the programming of the algorithms themselves. The biases, prejudices or human intentions behind this programming work must be carefully supervised. Even with tons of quality data, what happens next depends on what we do with it. Without regulation of the programming work, it is pointless to envisage any control whatsoever. Which again brings us back to ethical and responsible principles.

Who should determine the principles underpinning an AI framework? Governments and cities lack resources in this area. Worse, the digital literacy of those who must write these charters and laws is often insufficient.

It bears repeating: in the digital realm, governments are not on an equal footing with private companies. It is imperative to turn to the researchers, academics and scientists who have already reflected on the subject.

Moreover, examples already exist. Take the Montreal Declaration on Responsible AI, launched by Yoshua Bengio. This document is an essential guide that can very well accompany legislators. Montreal drew on it to write its Digital Data Charter, as did the City of Nantes in France for its own work. It should be emphasized that Europe is a step ahead of North America when it comes to regulating the use of data. This is why states and cities must coordinate better if their actions are to be effective.

Will this be enough to stem the drift already well under way? Of course not, but at least these gestures could help restore some control and reduce the risk of losing it altogether.

I am not naive; I know very well that some states will never want to go in this direction, and that this will weaken or even destroy our efforts. But neither can we adopt a posture of abdication, telling ourselves that all is lost in advance.

I support the increased use of data and AI to improve government performance. It is even a very promising avenue, if only for the ecological transition and resilience. However, without strong and rapid government intervention to regulate practices, it will be a waste of time. Legislators must do what they have lost the habit of doing in the economic sphere: regulate practices.
