Ottawa has decided to follow Washington's lead. The Minister of Innovation, Science and Industry, François-Philippe Champagne, took advantage of a conference on artificial intelligence applications held in Montreal on Wednesday to unveil a voluntary code of conduct for businesses, intended to prevent misuse of the technology by Canadian companies.
The minister was at the Palais des congrès de Montréal as part of an AI conference called All In to present his "Voluntary Code of Conduct for the Responsible Development and Management of Advanced Generative AI Systems." The code is structured around six fundamental principles that companies choosing to adhere to it must commit to respecting.
The code is intended as a temporary measure pending the adoption by the Canadian Parliament of the Artificial Intelligence and Data Act, which is part of Bill C-27, initially tabled by the Trudeau government in June 2022. Bill C-27 passed second reading in the House of Commons last April and is currently being studied by the Standing Committee on Industry and Technology.
“Through the Code of Conduct, Canadian companies will take the lead and adopt responsible frameworks for advanced generative AI systems, which will strengthen security and trust as the technology develops,” said Minister François-Philippe Champagne on stage.
With this announcement, Ottawa is imitating the approach adopted last July by the American federal government, when US President Joe Biden said he had reached an agreement with tech giants heavily invested in AI, including Amazon, Google, Meta and Microsoft, to put in place safeguards against irresponsible use of the technology.
Six principles
The voluntary code of conduct, which comes into force immediately, is based on six principles aimed at reassuring the public about the use of AI by Canadian companies.
These principles range from holding organizations accountable for their use of AI to mitigating the risks associated with handling the data and personal information needed to power the technology. The Code of Conduct also requires companies to assess and eliminate biases that could arise from the automation of certain processes and harm certain subgroups of the population. It demands transparency about the nature of the AI systems that organizations use and the content they generate. Finally, it requires assurance that these systems remain under human oversight and that all measures are taken to protect against possible cyberattacks or data leaks.
Present on site, representatives of the Canadian technology sector and the business world welcomed the federal government's approach to regulating AI. “It is essential that our industry addresses key issues, such as the risk of bias, by ensuring that humans maintain a clear role in monitoring and controlling this incredibly promising technology,” said Aidan Gomez, CEO and co-founder of Cohere. Based in Toronto, Cohere is a Canadian leader in the development of generative AI.
Minister François-Philippe Champagne also sat down Wednesday morning with Yoshua Bengio, a researcher at the Université de Montréal and one of the international authorities on AI. Mr. Bengio is among the researchers most concerned about the risks of AI being misused by certain organizations or states.
In late July, Mr. Bengio appeared before the US Senate to brief the US government on the opportunities and risks posed by the next generation of artificial intelligence.
More details will follow.