Regulating AI: myths, challenges and ways forward

The arrival of generative artificial intelligence (AI) in the public sphere, driven by ChatGPT, has given rise to many calls to regulate AI.

Many of these calls rest on myths. The first is that AI as such should be regulated. Beware of shiny object syndrome! AI does not exist on its own: it is a technology that will be incorporated into products and services offered by companies to users. Regulating AI therefore really means regulating the companies that offer it, the contracts and other mechanisms through which it is offered, and the users who employ it. This kind of regulation is not new.

This leads to the second myth, that of the blank page: the idea that AI evolves in a legal vacuum because no AI-specific rules spell out what is permitted or prohibited. In fact, from fundamental freedoms to private law, existing law already extends to the development, marketing and use of AI. That law must nevertheless be developed to address issues specific to AI. In this sense, regulating AI is more an updating exercise than an act of literary creation.

These two myths undoubtedly stem from the culture of innovation in the digital sector, which has governed the development of AI so far. Relying on a legal framework that is permissive toward digital innovation, this culture pushes for any invention to be brought to market quickly so that it can break through, even if that means correcting course later should difficulties arise. This approach has served our societies well since the 1980s, but the deployment of social networks, for example, has already shown its limits.

By comparison, other sectors, such as pharmaceuticals, are governed by a more protective legal framework, in which no invention reaches the market without first being tested and approved. These sectors are no less innovative for it.

But the protective approach fosters a more cautious corporate culture, in which the risks of an invention are assessed well in advance so that it can be recalibrated (or abandoned) if necessary, and in which abusive or malicious uses are anticipated and prevented.

Beyond the myths, the great challenge of AI regulation is therefore to change the culture of innovation in the digital sector, and that requires a more protective legal framework. Given the dynamic nature of AI and the current lack of evidence, however, this framework will not resemble that of pharmaceuticals. A “responsible innovation” model, in which companies co-manage the production and analysis of information on the socio-economic impact of their inventions together with public authorities and other stakeholders, seems more appropriate.

The European Union (EU) is heading toward such a model, having taken the lead with its draft AI Regulation, which is expected to be adopted by the end of the year. The regulation requires producers of AI systems to ensure that they comply with regulatory principles before the systems are placed on the market, and thus helps hold producers accountable. Canada is following in the EU's footsteps with Bill C-27, which is, however, less well documented.

In the shorter term, another legal reform may already push the digital sector to change its culture of innovation. For 30 years, the sector has enjoyed considerable leniency in matters of civil liability, a leniency that went hand in hand with the permissive approach described above. AI systems should instead be placed under a stricter regime, such as liability for defective products. Here again, the EU has set out on this path, and Quebec would do well to follow it.
