Sandwich shops, AI and social networks

While the Quebec Innovation Council holds its Public Forum on the governance of artificial intelligence (AI) in Montreal on Thursday, a call from AI luminaries reminds us of the urgency to act. In a manifesto released ahead of the international AI Safety Summit being held this week near London, Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Daniel Kahneman, Dawn Song, Yuval Noah Harari and several other experts warn the public against a laissez-faire attitude toward the development and use of these technologies. British computer scientist Stuart Russell laments that there are “more regulations on sandwich shops than on AI companies.”

These researchers believe that the latest AI models are too powerful and too consequential to be allowed to develop without democratic oversight. Investment in safety must come quickly, because AI is advancing much faster than the precautions being taken.

The experts urge governments to dedicate one-third of their AI research and development (R&D) funding, and companies one-third of their AI R&D resources, to the safe and ethical use of these systems. They call for laws requiring that independent auditors be given access to AI laboratories. AI companies should be obliged to adopt rigorous safety measures if dangerous capabilities are detected in their models. The researchers also argue that tech companies should be held liable for foreseeable and preventable harm caused by their AI systems.

Other recommendations concern mandatory incident reporting when AI models display alarming behavior, as well as measures to prevent dangerous AI models from self-replicating. Finally, regulators should have the power to suspend the development of AI models that present risks of dangerous behavior.

Ultimately, what is being demanded is that state authorities equip themselves with the means to understand and anticipate the societal stakes of deploying AI. In other words, to move beyond the blissful incantation of the dogma that “restricting innovation is forbidden” toward an approach that recognizes that true innovation is innovation deployed with respect for the rights and freedoms of individuals and communities.

Anticipating change

Like other technologies with high potential for intrusion and disruption, AI must be subject to guidelines designed not in a fussy manner or to satisfy lobbies, but on the basis of cutting-edge expertise, in order to anticipate problems proactively and respond with appropriately adapted measures and regulations.

The challenge is significant, because political decision-makers often have little access to the independent expertise needed to understand the stakes of such powerful technologies. Many states have neglected to maintain the in-house expertise required to understand how the devices of the connected world work and what is at stake.

The slowness of public authorities in putting in place laws imposing conditions on technology companies has already caused a great deal of damage. One need only look at the distressing civil disobedience campaign that Meta is currently waging against Canadian law, or the harms of its social networks, which are the subject of lawsuits filed last week by 41 American states.

By failing to impose on digital companies the same requirements that apply to everyone else, states find themselves at the mercy of the mood swings of the Web giants. These days, it is Meta that is censoring Canadian media. One dares not imagine what will happen when other multinationals, for example those holding a near-monopoly on cloud computing, begin to blackmail the governments that have outsourced the storage of their citizens’ data to them.

In short, states must develop independent capabilities to monitor technological trends, the questions they raise, and the risks to which AI and other information technologies expose populations. With technologies as powerful as AI, we cannot afford to wait for the damage to be proven before considering measures to remedy abuses.

The difficulties we are currently experiencing in applying to the Web giants measures such as those provided for in Bills C-11 and C-18 on the media demonstrate the disastrous consequences of the laissez-faire of the past thirty years. But this is nothing compared to what must be anticipated if states continue to rely blindly on commercial companies alone to steer the development of technologies as powerful as AI.

Pierre Trudel is a professor of media and information technology law at the University of Montreal.
