[Normand Baillargeon's column] AI at school, a famous dilemma

This week, two important announcements were made in the vast dossier of technology in schools. First, we learned that all fifth- and sixth-year students at the Centre de services scolaire des Premières-Seigneuries will receive a laptop computer by June. The goal of this program is to promote student success.

It is this same objective that is pursued, in its own way, by a project the Ministry of Education presents as an “unprecedented digital shift,” in which it is investing $10.6 million over two years. This time the focus is on artificial intelligence (AI), which will be used to improve students’ academic success by identifying those at risk of failure or dropping out, in order to intervene with them effectively.

This vast dossier of technology in schools, the technologies of yesterday as well as those of today (the computer), and especially those of tomorrow (AI), is a real intellectual minefield.

The minefield

To move forward with a minimum of safety, many things would be needed. For example, one would have to be familiar with the field in question (computing and AI, in this case) and with what it can and cannot promise, insofar as that is known at all. One would also need to know, where they exist, the results of credible research on the proven effects of a given practice, particularly on students; to compare these results, where applicable, with those obtained, or that could have been obtained, without these new tools; and to consider the costs of the practices implemented, both financially and in terms of their effects on school staff.

In all these cases, we should keep our poutine detector on high alert in order to guard against self-interested research, to take into account the weight of the economic interests at stake and their impact, and to assess the part played by a certain technophilia that often nothing justifies. Without forgetting to think, over the long term, about situations where relatively positive effects can become less so along the way.

It’s no small task, and I admit I don’t have the skills needed to do all this work. Take AI: I am mainly interested in it from a philosophical point of view, and I wonder, along with others, what it can contribute to the problem of consciousness. To quote a famous quip: is it artificial intelligence or natural stupidity? But none of this is of much help in thinking about the possible repercussions of AI in education, or elsewhere. On this subject, among authors who seem to know what they are talking about, the conclusions vary completely, some telling us it is hell, others paradise, others purgatory.

I don’t think I’m alone in being unable to speak confidently about everything I’ve just listed. That said, when it comes to education, some things can be said with relative confidence.

History shows, for example, from the early audiovisual aids to the computer and AI, by way of interactive whiteboards and other gadgets, that we have often been guilty of technophilia. It also shows that large sums have sometimes been spent unwisely, and that commercial pressures were not always unrelated to this.

Research shows that the benefits of computers for learning are modest, and that distance learning is a lesser evil to be resorted to only when circumstances require it. Finally, time has also revealed harmful, and sometimes even very harmful, effects of computers.

I would like to suggest here (very modestly) a way to advance a little more surely through our minefield. A way inspired by a famous dilemma concerning technology.

Confronting Collingridge’s Dilemma

This dilemma, formulated in 1980 by David Collingridge, puts its finger on what seems to me to be at the heart of the problems I raised above. According to Collingridge, we face a double bind when implementing new technologies. On the one hand, there is a problem of information, of knowledge: we will not know the real impact of these technologies until they are widely deployed. On the other hand, there is a problem of power: once the technologies are in place, controlling them becomes very difficult, as does making any change we might wish to bring to them.

Possible solutions seem to suggest themselves once the problem is posed in this way. Here are a few, some of which are already (a little?) in place.

We should seriously consider what the research says before launching anything.

We should immediately and constantly assess the effects of what we put forward.

We should ensure that these evaluations are free from political, ideological or commercial partisanship.

To do this, we should welcome all points of view on the subject and share all the knowledge thus gathered, no doubt through an autonomous structure designed for this purpose.

Finally, and this is newer and more important, we should ensure that the instigators of change share the risks and losses of all kinds, including the economic losses, that may occur.
