Remember the time, not so long ago, when the concepts of truth and falsity were the business of philosophy teachers?
It was a time when, in an institution like mine, the most curious students indulged in absurd thought experiments pitting the real against the unreal, while the others, more down to earth, patiently waited for the class to end so they could devote their efforts to courses in economics or law, which seemed more likely to be of some use in their lives. Those were the good old days.
But now, with the arrival of artificial intelligence (AI), things have changed a great deal. The open letter of March 29, co-signed by several developers and eminent intellectuals (Musk, Wozniak, Harari and others) calling for a moratorium on the development of AIs more powerful than GPT-4, testifies to the depth of the societal upheavals we must now reflect on.
We are not dealing, as some would have us believe, with mere conversational robots, but with entities of incredible power whose disruptive effects across a multitude of fields of human activity remain, to a large extent, unknown.
Whether we think of the viral image of the Pope wearing a stylized coat created by Midjourney, or the interview between Justin Trudeau and Joe Rogan fabricated with the help of another AI, the same conclusion is inescapable: the risks of misuse are very real. Scenarios such as the release of a video of Vladimir Putin announcing that World War III has begun, or the arrival in your inbox of a convincing photo of yourself in the company of a prostitute, no longer require technical breakthroughs so much as the assembly of existing capabilities.
When these AI-engineered images, videos and texts circulate through inboxes, news feeds and social media, whom will we trust? If many evoke a possible civilizational collapse, it is, among other things, because of the attack on the concept that serves as the cornerstone of our societies: truth. We thought it was the business of philosophers; with the arrival of these entities, we realize that it ensures social cohesion, sustains the rule of law and keeps the economy functioning.
Plagiarism and fraud
And even if the possibilities associated with the malicious use of AI did not worry you, you should know that no ill will whatsoever is required to produce deleterious effects. This is where things become more troubling.
The question of fraud and plagiarism within schools, extraordinarily facilitated by the arrival of AI, has rightly caused much ink to flow in recent months. One of the fundamental functions of the education system is to certify the acquisition of certain abilities and skills before someone is authorized to perform certain actions and occupy various critical functions in society.
Faced with these new possibilities available online, several of my colleagues have returned to in-class assessments, the only way we can see of guaranteeing that the tests our students sit are passed on the strength of their own abilities.
However, Cégep à distance has the wind in its sails and now records tens of thousands of registrations annually.¹ How do you maintain fairness in a remote evaluation process? What are diplomas acquired in this way now worth? Within the walls of the very real institution where I teach, several students, faced with grades lower than those of classmates who had used AI to do their assignments at home, recently confided to me that they too would use its services in the future. Are they bad students, or young realists who, given the gap between the ways of an outdated system and the world as it is, are turning to a sensible adaptive strategy?
Even more serious is the blithe optimism that has taken hold of many within our institutions. We now hear certain pedagogues proclaim that the declarative knowledge we once valued must be replaced by new capacities for critical thinking, on which we should rely in the inevitable dealings we will have with these machines. Anything that can be googled would therefore no longer have a place in an exam.
Which raises a question: how on earth is a person supposed to think critically about an answer produced by an AI if they lack, at the base, the solid body of declarative knowledge needed to build more elaborate knowledge, itself required to exercise that high-level intellectual capacity we call critical judgment?
The arrival of these AIs forces us to ask questions we have swept under the rug for too long, questions about the foundations of our society and the underpinnings of our education system. In this new context, what does the ideal we pursue, that of leading individuals to think for themselves, actually mean? How exactly should thinking for oneself manifest itself? What place should we give to general culture? If it is technologically possible to virtualize learning and tether it to such technological powerhouses, is it desirable? What meaning should we give to the current talk of adapting to AI?
Knowing that it is impossible to put the genie back in the bottle, and that the handcuffs, if they hold at all, will not contain it for long, it is urgent that our society set itself guidelines. "Adapting," on its own, means nothing. Improvisation and tinkering cannot go on any longer. We need to determine what matters to us and develop a plan to get there.