Like the United States, Ottawa wants to convince major Canadian technology companies to voluntarily subscribe to a code of conduct aimed at preventing abuses of artificial intelligence. The Minister of Innovation, Science and Industry, François-Philippe Champagne, took advantage of the opening of the ALL IN summit in Montreal on Wednesday to make the announcement.
A few companies, notably OpenText and Cohere, are already among the signatories of this “voluntary code of conduct to regulate advanced generative AI systems”. This non-binding code is structured around six principles, the objective of which is to “assure Canadians that the AI they use is safe,” declared the minister.
Signatory companies commit, among other things, to putting risk management systems in place, identifying bias, being transparent about their use of AI, having humans monitor activities, and ensuring that these systems are reliable and secure.
“It’s a voluntary code,” Minister Champagne acknowledged at a press briefing shortly after the announcement. “It’s a bit like what was done in the United States: you may remember that President Biden asked the technology giants to come to Washington to subscribe to a code.”
“An important step” for Bengio
The Trudeau government’s main piece of legislation on regulating AI, Bill C-27, was tabled in June 2022. It is currently being studied by the Standing Committee on Industry and Technology and will likely not be adopted for several months.
Canada will probably be the first country in the world to have a law that will govern the protection of personal information and AI. But first, we wanted to have a code.
François-Philippe Champagne, Minister of Innovation, Science and Industry
Present on stage for the announcement, researcher and world-renowned AI authority Yoshua Bengio insisted that decisions about this technology must not be made “behind closed doors, under market pressure”. He believes this code of conduct is “an important step in this direction”.
“It’s really important that we invest and protect the public, to ensure that AI works for the general good of society,” he added. “And we must do it with our international partners, because AI has no borders.”
Recognized as a global hub for university research in AI, Montreal has been a pioneer in AI ethics since 2017, the year of the publication of the “Montreal Declaration for Responsible Development of AI”. More recently, in March 2023, Mr. Bengio was one of the most active promoters of a call for a six-month moratorium on the development of the most powerful AI systems, a petition signed by tens of thousands of experts and prominent figures around the world.
Towards a creative divide
These AI-related concerns featured prominently Wednesday in the programming of ALL IN, a summit holding its first edition and billed as “the largest Canadian gathering on AI”. Organizers announced in the morning that more than 1,400 people had registered for the summit, most of whose sessions could also be followed online.
Two talks in particular dealt with the impact of AI on culture and the upheavals experienced by creators. Argentine philosopher, publisher and entrepreneur Octavio Kulesz spoke notably of a new type of “creative divide”, made worse by the advent of generative AI such as ChatGPT or DALL-E.
“It was said, in the 90s and 2000s, that everyone, whether in rich countries or not, was equally creative. The problem now with these generative technologies is that developed countries have access to much richer technological infrastructure than developing countries. People who don’t have access to these machines will be less creative.”
The links between AI and culture go both ways, he explained: AI can create works, but it is fundamentally fed by writings, images, songs and videos produced by humans. “It’s a reciprocal impact […] Culture is an ‘output’, but also an ‘input’. If we dig deeper, we see that the real pioneer is not science, but art and culture.”