Artificial intelligence (AI)? Masses of data, often personal or private, swallowed, analyzed, combined and regurgitated in new forms, through aggregation and for purposes that generally remain obscure.
Three adjectives here: personal, new and obscure.
And three concerns.
Is personal data protected?
Do the new forms violate intellectual property protections, or do they benefit from them?
Who is responsible when an AI solution makes a harmful decision?
These are just some of the issues.
AI entails “business risks, reputational risks, legal risks”, lists Me Paul Gagnon, one of the two technology and artificial intelligence co-leaders at BCF Business Lawyers.
“A great example that demonstrates how all three can happen at the same time: internal chatbots, which are very popular,” adds his colleague Misha Benjamin.
What French speakers sometimes call an “agent conversationnel” or a “dialogueur”.
We take ChatGPT, we deploy it internally, and we expose it to our data. If I work at a consulting engineering firm and want to know how to solve a problem, rather than asking three colleagues, I can ask the chatbot.
Misha Benjamin, technology and artificial intelligence co-leader at BCF Business Lawyers
The conversational agent will produce a summary of the information the company already holds, link to documents relevant to the subject, and perhaps name people with expertise in the field.
“Very useful. But if we haven’t clearly defined the governance of the thing and planned our project accordingly, I could also ask it what the salary of the colleague across from me is, or who has been reprimanded by HR. On a legal level, in terms of privacy protection, you can see the issues.”
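The governance gap Me Benjamin describes is, at bottom, an access-control problem: if the chatbot can draw on the entire internal corpus, it can surface documents the asking employee was never allowed to read. The sketch below illustrates one common safeguard, filtering retrieved documents against the user’s permissions before the model ever sees them. It is a minimal illustration only: the corpus, the group names, the `acl` field and the `retrieve` helper are all hypothetical, not a description of any product mentioned in this article.

```python
from dataclasses import dataclass


@dataclass
class Document:
    text: str
    acl: set[str]  # hypothetical: groups allowed to read this document


# Hypothetical internal corpus: engineering notes are broadly readable,
# HR records are restricted to the "hr" group.
CORPUS = [
    Document("Vibration issue on bridge model X: see report 2023-14.",
             {"engineering", "hr"}),
    Document("Salary review and HR reprimand records for Q2.", {"hr"}),
]


def retrieve(query: str, user_groups: set[str]) -> list[Document]:
    """Return only documents the requesting user is cleared to read.

    A real deployment would combine semantic search with this filter;
    here, simple keyword matching stands in for retrieval.
    """
    words = query.lower().split()
    hits = [d for d in CORPUS if any(w in d.text.lower() for w in words)]
    return [d for d in hits if d.acl & user_groups]


# An engineer asking about salaries gets nothing back: the access-control
# filter, not the language model, is what prevents the leak.
print(retrieve("salary", {"engineering"}))  # []
print(retrieve("salary", {"hr"}))           # [Document(...)]
```

The design point is that permissions are enforced at the retrieval layer, before any text reaches the model; relying on the chatbot itself to decline sensitive questions would leave exactly the exposure the lawyers warn about.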
Current law
Because artificial intelligence did not appear in a society without safeguards.
“There are plenty of existing laws that provide a legal framework for artificial intelligence, even before we adopt laws specific to AI,” recalls Me Éric Lavallée, head of the Lavery Legal Laboratory on Artificial Intelligence.
“Quebec already had the Act to establish a legal framework for information technology, which to a certain extent governs artificial intelligence and its use by companies.”
Labour law also comes into play.
“If we use an AI tool for hiring or disciplinary decisions and then realize there is a bias, it is the existing laws that will apply. We don’t need a new law to tell us that we will be held responsible,” says Misha Benjamin.
Furthermore, the explosive emergence of generative artificial intelligence – capable of creating text or images, for example – raises new issues with regard to intellectual property.
We can use AI as a tool, within current laws, and create something we still own. But we might also use AI to invent things we will not own 100%, depending on the agreement with the party that makes the AI available to us.
Misha Benjamin
New legal frameworks
To face these unprecedented challenges, new legal frameworks are being built.
Last March, the European Parliament adopted what is intended to be the world’s first law on artificial intelligence, which aims to guarantee safety and respect for fundamental rights while encouraging innovation.
“In certain contexts and depending on the risks involved, obligations are created, such as public disclosure or risk analysis,” explains Paul Gagnon. “In Canada, Bill C-27, currently under study, provides for substantially the same type of risk analysis for the development of artificial intelligence systems.”
These legal frameworks, existing and new, outline the standards of practice and precautions that companies should already be building into their use of AI.
“We are here precisely to help people start building this muscle: managing projects properly, setting up AI governance upstream within the company, so that it is well developed by the time the laws apply,” explains Misha Benjamin.
Companies can already start practicing.