[Column by Pierre Trudel] Discriminatory artificial intelligence

French-speaking international students are subjected to treatment that bears all the hallmarks of systemic discrimination. Africans, particularly French speakers, face a disproportionate number of refusals of permits to study in Canada. The artificial intelligence (AI) systems used by federal immigration authorities are blamed for these systemic biases.

MP Alexis Brunelle-Duceppe recalled this month that “French-language universities rank […] highest in the number of study applications refused. It is not the universities themselves that refuse them, but the federal government. For example, 79% of applications from international students were refused at the Université du Québec à Trois-Rivières, and 58% at the Université du Québec à Chicoutimi. As for McGill University, […] we are talking about 9%.”

In February, the vice-rector of the University of Ottawa, Sanni Yaya, noted that “over the past few years, many permit applications processed by Immigration, Refugees and Citizenship Canada have been refused for reasons that are often incomprehensible, and have taken abnormally long to process.” Yet these are students with scholarships guaranteed by their institution and a good record. The vice-rector rightly wonders whether there is an implicit prejudice on the part of the officer responsible for their evaluation, convinced that they do not intend to leave Canada once their study permit has expired.

In short, a body of evidence points to the conclusion that the computerized decision-making tools used by the federal authorities amplify systemic discrimination against French-speaking students from Africa.

Biased tools

This mess should prompt us to examine the prejudices amplified by AI tools. Everyone is affected, because these technologies are an integral part of daily life. Phones with facial recognition, home assistants, even “smart” vacuum cleaners, not to mention the devices embedded in many vehicles, all run on AI.

Professor Karine Gentelet and student Lily-Cannelle Mathieu explain, in an article published on the website of the International Observatory on the Societal Impacts of AI and Digital Technology, that AI technologies, although often presented as neutral, are shaped by the social environment from which they emerge. They tend to reproduce and even amplify biases and inequitable power relations.

The researchers point out that several studies have shown that, if not adequately regulated, these technologies exclude racialized populations, over-represent them in social categories considered “problematic”, or simply function poorly when applied to racialized individuals. They can accentuate discriminatory tendencies in a range of decision-making processes, such as police surveillance, medical diagnoses, court decisions, hiring, school admissions and even the calculation of mortgage rates.

A necessary law

Last June, the federal Minister of Innovation, Science and Industry introduced Bill C-27 to regulate the use of artificial intelligence technologies. The bill would impose transparency and accountability obligations on companies that make significant use of AI technologies.

The bill proposes to prohibit certain conduct involving AI systems that can cause serious harm to individuals, and includes provisions to hold companies that profit from these technologies accountable. The law would ensure proper governance and oversight of AI systems in order to prevent physical or psychological harm, or economic loss, inflicted on individuals.

The bill also seeks to prevent skewed output that draws an unjustified adverse distinction on one or more of the grounds of discrimination prohibited by human rights legislation. Users of AI technologies would be bound by obligations to assess and mitigate the risks inherent in their systems. The bill would introduce transparency obligations for systems with the potential to significantly affect individuals: those who make AI systems available would be required to publish clear explanations of their operating conditions, as well as of the decisions, recommendations or predictions they make.

The discriminatory treatment suffered by many students from French-speaking African countries illustrates the systemic biases that must be identified, analyzed and eliminated. It is a reminder that deploying AI technologies carries significant risks of reproducing problematic tendencies in decision-making processes. To deal with such risks, we need legislation that imposes strong transparency and accountability requirements on companies and public authorities alike. Above all, we must dispense with the myth of the supposed “neutrality” of these technical tools.
