The latest feats of artificial intelligence (AI) systems point to new possibilities for our societies. At the same time, they force us to reflect on the responsible use of AI, so much so that in recent weeks several debates have emerged over the normative framework for AI around the world, and in Canada in particular, through the Artificial Intelligence and Data Act (LIAD), part of Bill C-27.
The adoption of this law, however, is far from achieving consensus. For some, particularly members of Canada's small AI community, it is urgent to adopt the LIAD. They consider that its current version strikes "a fair balance between the protection of the Canadian population and the imperatives related to innovation". It should be emphasized, however, that these computer specialists, among them the leaders of private companies, have interests, particularly economic ones, in campaigning for the rapid adoption of this bill.
Other researchers, mostly from the social sciences, have responded by urging that we not rush, arguing that the law should instead be significantly improved before it is adopted. My recent comparative analyses of the LIAD, the proposed EU AI legislation and the proposed US Algorithmic Accountability Act lead me to the same conclusion. It should also be remembered that adopting it too quickly could be problematic, insofar as it is very difficult to amend a law once it has entered into force.
It is clear that the European and American measures are, for the moment, more rigorous than the Canadian LIAD, which is far from being at the forefront, contrary to what the promoters of the current version of the bill write. For example, unlike the European AI legislation, the LIAD does not prohibit the creation of social scoring systems for natural persons at the initiative, or on behalf, of public authorities.
As the second group of researchers reminds us, nor does it prohibit remote, real-time biometric recognition systems. Yet these two practices infringe the principle of non-discrimination and the right to dignity. They go against the values of justice and equality and interfere with the freedoms and rights of the people concerned. It is therefore urgent that the LIAD prohibit systems intended for such practices and incorporate these safeguards.
The LIAD should also strengthen its requirements regarding the governance of "algorithmic" data. As it stands, these requirements mainly concern how the data is anonymized and how anonymized data is used in AI systems. By comparison, the European legislation is concerned with analyzing potential biases and with how data is chosen, collected and used. The LIAD should, at a minimum, do the same.
In terms of transparency, the LIAD should go further and encourage openness, that is, access to the data, including training data, used by algorithms whose decisions are likely to affect human rights or raise ethical concerns, subject to industrial secrecy (access conditions may be enacted) and regulatory restrictions.
Since machine learning algorithms must first be trained on data, and that data plays a fundamental role in their ability to make inferences, it is also essential to promote access to the databases behind this training and to the criteria by which they were selected. This would be an effective and legitimate way to monitor the scope of these AI systems and to understand their biases, if any.
Transparency through the opening of algorithmic processing data, or the creation of what I call "open algorithmic data", would ensure that AI systems are part of a dynamic aimed at preventing a substantial share of present and future technological drifts. Ultimately, this represents an opportunity for the Government of Canada, which promotes open data and pro-technology policies, to redefine the rules of the game.
Maintaining Canada's leadership in responsible AI and protecting the integrity of Canadians, while fostering technological innovation, also requires flawless regulatory requirements. If Canada truly wants to establish itself as one of the pioneering countries in the normative framing of technologies, thereby marking this crucial moment in history, it must strengthen the LIAD, not adopt its current version.