There is only one step between minimal and unacceptable risk… Now that artificial intelligence (AI) systems like ChatGPT are available to the public, Canada and the European Union are rushing to adopt regulatory frameworks to prevent the abuses these new technologies make possible.
Artificial intelligence specialists calling for a moratorium on AI development will be happy to learn that they have the support of the federal Minister of Innovation, Science and Economic Development, François-Philippe Champagne. In a press scrum on Wednesday, he responded positively to the open letter signed by several hundred figures from the technology world asking for better oversight of AI, even if it means imposing a six-month pause on the development of the most powerful systems.
“We have to give ourselves a framework, I completely agree,” declared François-Philippe Champagne. “This is what we propose in [Bill] C-27: there’s a whole chapter on an AI framework. We will be the first country with such a framework.”
At the rate things are going, Canada could in fact pass its Artificial Intelligence and Data Act (AIDA) as early as this summer, as part of the broader Bill C-27 on the management and use of personal information in the digital age.
“It has the potential to be a very interesting law,” said Pierre Trudel, full professor at the Centre for Research in Public Law at the University of Montreal. “It could give Canadian innovation a head start: applications or artificial intelligence software produced here would comply with a law that guarantees very limited risk for their users.”
Managing risk
The Canadian and European laws are very similar: both are based on the risks that artificial intelligence systems may pose.
The European law introduces four levels of risk. Minimal risk presents no danger; an automated video game character is one example. Limited risk arises where there is potential for manipulation, for example via a chatbot. AI that undermines a person’s safety or fundamental rights would be high risk; algorithms that automate a company’s hiring process fall into this category, as such systems have been shown to sometimes harbour negative biases against certain individuals.
A state-run social scoring system or unauthorized biometric identification by law enforcement is deemed an unacceptable risk. Such AI will simply be banned in the European Union.
In Canada as in Europe, all actors in the development chain will be held responsible for the use of AI, from the programmers of the algorithms to their users. “Nobody can wash their hands of it,” says Pierre Trudel.
In Canada, an AI commissioner will be appointed once the law comes into force. The commissioner will be able to intervene and limit the application of new technologies, working with the Privacy Commissioner of Canada, the CRTC and any other organizations that may be affected.
“That too is new. In general, federal agencies work in their own corner,” says Pierre Trudel.
China in its sights
The United States will also have to cooperate, because the European law will have extraterritorial scope: as soon as an AI is used on European soil, its creator, whether American, Chinese or other, will have to answer for it. The EU is threatening financial penalties of up to 6% of worldwide turnover for companies found at fault.
“It’s not an easy task,” says Anthony Hémond, a lawyer at the Montreal firm Borden Ladner Gervais who specializes in privacy and personal information. “Europe wants to be able to pin down companies established in China, for example.”
Whether such companies will comply remains to be seen, because their task will not be light: the way they market their technology will change. It could resemble the way drugs are first tested and then approved by bodies like Health Canada before reaching the market.
“A developer will have to detail his code so that the next one can understand how to apply it to avoid abuse,” warns Me Hémond. “It will not be an easy task.”