Artificial intelligence: a legal framework to be established

This text is part of the special section Artificial Intelligence and Cybersecurity

Artificial intelligence (AI) amazes as much as it frightens. Far from the heated debates between technophiles and technophobes, legislators must establish effective, appropriate rules that allow our societies to reap the full promise of these new technologies while preventing abuses.

Artificial intelligence did not wait for the breakthrough of ChatGPT to revolutionize our lives. In health care, AI has already changed the game in the detection and treatment of pathologies. “If algorithms can help caregivers in their work, we can treat more patients, more quickly, rather than leaving them to wait in a corridor,” notes Philippe Goulet Coulombe, professor in the Department of Economics at UQAM who specializes in machine learning.

Other sectors have also benefited greatly from these new technologies: transportation, through autonomous vehicles; the environment, with tools that curb the waste of resources; and education, through the prevention of school dropout. But this progress cannot mask legitimate fears about the potential abuses of artificial intelligence.

“There are real dangers and there is clearly work to be done,” analyzes Philippe Goulet Coulombe. “The legislator must ensure that there are as few drawbacks as possible, so as to ease the transition toward widespread use of these technological advances.” By making its way into every sector of human activity, however, AI has seriously complicated the regulator’s task.

Specific issues

“This is where the difficulty lies for jurists,” confirms Anne-Sophie Hulin, professor at the Faculty of Law at the University of Sherbrooke and holder of the Research Chair on Social Justice and Artificial Intelligence. “We are obliged to take a transversal approach, but we cannot do without a sectoral approach, because each particular use of AI raises its own specific problems.”

In the creative fields, generative AI has sparked lively debates over intellectual property by challenging the very notion of copyright. In health and education, it is the use and protection of the personal data needed for algorithms to function properly that raises serious problems. In industry, concerns centre more on job losses, as administrative tasks become increasingly automated.

How can the law grasp the use of AI in all its diversity while maintaining overall coherence? “Major advances are being made from a legal standpoint,” replies Anne-Sophie Hulin. “Laws already exist that govern AI. What will soon be debated is whether a special body of AI law is needed to govern the development of these tools.”

The quest for technological neutrality

The other major challenge is adapting to a technology that is evolving very quickly. “We must ensure that the law remains durable and find the right level of abstraction,” says Anne-Sophie Hulin. “This is what we call technological neutrality. The principles we establish to frame AI must stand the test of time. What makes it difficult is that we have no idea what the future holds.”

Some have suggested imposing a moratorium on AI, but the idea, far from unanimous, seems difficult to enforce. “Never in history has a technological advance been put back in the drawer,” says Philippe Goulet Coulombe. Since there is no catching a train that has already left the station, it seems wiser to deepen the collaboration between lawyers and engineers in search of lasting solutions.

A multidisciplinary reflection to be carried out

“For a law to be effective, it must be adapted to operational realities. Dialogue between AI developers and lawyers is therefore absolutely necessary,” explains Anne-Sophie Hulin, welcoming the Montreal Declaration for responsible development of AI, which laid the foundations of this collaboration in 2017. “A general ethical framework was established by bringing together the plurality of actors in the AI ecosystem.”

The Quebec Innovation Council (CIQ) has taken the same path, recently launching public consultations on AI. The reflection is being led by experts from all backgrounds, including Anne-Sophie Hulin, tasked with thinking about “a specific AI governance framework,” alongside Sophie Larivière-Mantha, president of the Order of Engineers of Quebec.

The objective of these consultations is ultimately to establish values and principles intended to guide the adoption of new laws, as well as the adaptation of existing standards. “When we look at history, every innovation has been a source of concern,” confides Anne-Sophie Hulin. “But as lawyers, we are not without recourse. It is interesting to see a debate taking shape on what AI should be.”

This content was produced by Le Devoir’s Special Publications team, which reports to marketing. Le Devoir’s editorial staff did not take part.
