US President Joe Biden on Monday adopted new ground rules and guardrails for the growth and development of artificial intelligence (AI). Canadian experts now hope that Ottawa will take a cautious but complementary approach to this growing issue.
The executive order, billed as the most comprehensive government action on artificial intelligence in the history of the technology, covers a wide range of areas, from public and national security to the protection of individual rights.
To both realize the promise of AI and avoid its many perils, the technology must be carefully regulated, Mr. Biden said at a White House ceremony.
“We are facing a truly defining moment in history — one of those moments where the decisions we make in the very short term will set the course for the coming decades,” he said.
“There is no greater change I can think of in my life than the potential AI presents: exploring the universe, fighting climate change, ending cancer as we know it, and so much more.”
Canada was one of the main US allies the White House consulted in recent months while developing the new regulatory framework, Innovation Minister François-Philippe Champagne said in a press release. Mr. Champagne will take part this week in the AI Safety Summit in the United Kingdom.
Canada’s national AI strategy was released in 2017 and last month Ottawa launched a new voluntary code of conduct for the development of advanced AI systems, Minister Champagne added.
National and public security
Under the executive order signed by Joe Biden on Monday, AI developers will be required to share with the US government the results of their security research and testing when that work focuses on areas that pose a risk to national security, public safety or the health of the American economy.
The order also creates a new AI Safety and Security Board, under the auspices of the Department of Homeland Security, which will assess threats to critical infrastructure, as well as any chemical, biological, nuclear, or cybersecurity hazards.
The Commerce Department will establish new rules for watermarking and content authentication, so that any AI-generated content is clearly identified as such, in order to mitigate the growing risks of misinformation and fraud.
President Biden, who for several months has been both captivated and concerned by the potential of this technology, briefly departed from his prepared remarks on Monday to describe how deepfake videos can be produced in just a few seconds from authentic content.
Regulate without hindering innovation
Mark Daley, who earlier this month began a five-year term as Western University’s first-ever chief AI officer, acknowledged the challenge of walking the fine line between the technology’s promise and its potential dangers.
“It’s extremely difficult to find the right balance between taking very real societal concerns about security seriously, while not killing innovation,” Daley said.
Canada has already taken preliminary steps that are broadly in line with the direction the White House is taking — which Professor Daley says should give Ottawa some leeway to avoid the risk of stifling innovation.
Such an approach would be appropriate, he added, given the important role Canada plays in developing some of the foundational AI technologies. “There is an opportunity for Canada to move more towards innovation,” he said.
“And I think that’s appropriate because Canada is in part the birthplace of deep learning technology, which is so attractive right now.”
But some fear this rapidly evolving technology could fall into the wrong hands, and say Canada in particular is moving too slowly when it comes to regulating the growth of AI.
Yoshua Bengio, a prominent Canadian AI pioneer in Montreal, recently lamented the slow adoption of Bill C-27, designed in part to regulate this technology.
The bill passed first and second readings in the House of Commons earlier this year, but has since been stuck at the parliamentary committee stage.