[Opinion] The development of AI must be the subject of democratic debate

Artificial intelligence (AI) is already widely deployed in our daily lives, and major social changes are on the horizon. Its benefits for health care and for epidemiological and meteorological forecasting are indisputable. However, the ethical and social issues it raises cannot be ignored.

The risks of opacity, error, bias and discrimination in automated decision-making systems have been documented around the world, as has the surveillance potential of tools such as facial recognition. Generative AI, such as OpenAI's ChatGPT, also raises intellectual property and privacy concerns and is a powerful vehicle for disinformation and opinion manipulation.

Existing legal rules do apply to these issues, so there is not, strictly speaking, a legal vacuum, but we must acknowledge how difficult those rules are to enforce and how much they need to change.

Bill C-27 amends the legal framework for the protection of personal information in Canada's federally regulated private sector. Part 3 of the bill specifically seeks to regulate the deployment of AI. This part has drawn strong criticism, which leads us to propose ways to improve its content and ease its passage through Parliament.

We strongly believe that the development of AI should be the subject of democratic debate. Parliamentary debate, as part of the legislative process, meets this need by allowing diverse arguments and opinions to be expressed. Bill C-27, however, is very incomplete. Many key concepts, such as "high-impact AI systems", are left undefined, and several provisions defer to future regulations, depriving interested parties of the opportunity to debate the substance of the law.

However, in a companion document published on March 13, 2023, the Canadian government indicates that the future Artificial Intelligence and Data Act (AIDA) is only a first step in building a new legal framework for AI. The government envisions a consultation process of at least two years before AIDA and its first set of regulations come into force, which would therefore happen around 2025.

Such a strategy would give AI researchers and innovators certainty while preserving flexibility for future regulation. The consultation process can keep pace with the technology's evolution, but we insist that it be carried out seriously. AI literacy must become a political priority: consultation should not be reserved for a handful of specialists but opened to the entire population, through concrete information and training sessions.

Including the entire population in "the great national dialogue on AI" that we are calling for is essential to fight discrimination and exclusion and to persuade AI designers to meet technical requirements that serve society's interest.

We also maintain that it is not too early to legislate. There will never be an ideal moment: the field is in perpetual evolution and will never offer certainty or perfect conditions for oversight. Yet AI is embedded every day in the products and services we use, with no check on its compliance with the law or the public interest. Delaying legislation would therefore have direct and potentially very harmful consequences for the population.

Moreover, passing a law would help structure the market, create certainty and bring out the key players, starting with the government agencies and departments that will oversee AI systems. Finally, adopting a law today in no way precludes future adaptation. Impact studies on the law will be needed, and its revision mechanisms should be made easier so it can keep pace with technologies and the market. We therefore believe that the strategy of adopting a general law accompanied by easily amendable regulations is entirely appropriate.

We further believe that the federal government cannot avoid addressing the division of legislative powers over AI or cooperating with the provincial governments. Technology certainly does not stop at provincial borders, but what then becomes of AI systems deployed in areas of provincial or essentially provincial jurisdiction, such as health or education? Given the deeply intertwined federal and provincial jurisdictions and the cross-cutting nature of the field, that dialogue must begin today.

Postponing this collective reflection because the subject is supposedly too complex would be the worst scenario. Foreign companies are not waiting for legislators before putting AI tools on the Canadian market. It is high time to launch a national debate on governing these technologies so we can benefit from them while ensuring their sustainable development and including as many citizens as possible in the discussion.
