A leading artificial intelligence pioneer believes the Canadian government should act urgently to regulate this technology, because its current trajectory presents major societal risks.
Montreal professor Yoshua Bengio, often called one of the “godfathers” of artificial intelligence (AI), told a Commons committee on Monday that Ottawa should put a law in place immediately, even an imperfect one.
The founder and scientific director of Mila, the Quebec Artificial Intelligence Institute affiliated with the University of Montreal, said a “superhuman” machine, one as intelligent as a human being, could be developed within twenty years, or even within a few. And “we are not ready” to face it, according to this world authority on AI.
One of the near-term risks of AI is the use of deepfakes to spread disinformation, he said.
Deepfakes already make it possible to circulate videos that appear to show a public figure saying something they never said or doing something that never happened.
The technology can also be used to interact with people through text or dialogue “in a way that can trick a social media user into changing their mind on political issues,” Bengio said.
“There are real concerns about the use of AI for political purposes, which run counter to the principles of our democracy.”
Furthermore, he is concerned that in a year or two, more advanced systems could be used for cyberattacks, as AI systems are getting better and better at programming. “When these systems become powerful enough to defeat our current cyber defenses and industrial digital infrastructure, we will be in trouble, especially if they fall into the wrong hands,” Professor Bengio said.
“Showing their credentials”
The Commons Standing Committee on Industry and Technology, where Professor Bengio testified, is currently studying the Liberal government’s bill that would update the Privacy Act and begin to regulate certain artificial intelligence systems.
The bill as currently drafted would give the government time to develop regulations, but Professor Bengio believes some provisions should come into force now. “With the current approach, it would take about two years before the law could be implemented,” he stressed.
One of the first rules he wants to see implemented is a registry requiring systems above a specified level of capability to be reported to the government, “where companies have to show their credentials, show that their product will not cause harm.”
As with pharmaceuticals, for example, this approach would place the responsibility and cost of demonstrating safety on the large technology companies that develop these systems, rather than on taxpayers, Bengio pointed out.
“Until now, computing has escaped all of this, because we thought it would not have a big impact on society. But we are coming to a point […] where we need supervision.”
Asked about the virtues of corporate self-regulation, Bengio said it could be “a good intermediate step […] because it can be put in place quickly, and then the companies can consult with each other to try to arrive at standards.”
“But there are going to be bad players, and also an incentive to cut corners, if we don’t have mandatory rules that apply equally to everyone.”
“General-purpose” systems
Bill C-27, introduced in June 2022, aims to target what is described as “high impact” AI systems.
Mr. Bengio believes the government should amend the definition of “high impact” in the bill to include technologies that pose threats to national security and society.
This could include any AI systems that malicious actors could use to design cyberattacks and dangerous weapons, or systems that find ways to self-replicate despite programming instructions to the contrary.
Generative AI systems like ChatGPT, capable of creating text, images, and videos, became widely available to the public after the bill was introduced. The government says it plans to amend the bill to take these technological advances into account.
Liberals say they aim to force the companies behind these systems to take steps to ensure that the content they create is identifiable as AI-generated.
According to Professor Bengio, it is “very important to cover general-purpose AI systems, because they are also the ones that could be the most dangerous if misused.”
Catherine Régis, a professor at the University of Montreal, also told committee members Monday that the government must act urgently, citing recent rapid developments in AI.
She stressed that regulating AI is a global effort and that Canada must position itself if it wants a voice internationally, where decisions will be made. If Canada wants to play a credible and influential role in shaping global governance, it will first have to establish a clear and solid vision at the national level, according to Professor Régis.