GPT-4, the latest and much-vaunted version of OpenAI’s language engine, is currently the subject of heated debate.
Here are three examples.
There is concern, and rightly so, about the spread of falsehoods, which GPT-4 is reported to produce between 20 and 40 percent of the time. We wonder what effects artificial intelligence (AI) will have on employment, fearing they will be dramatic. And there is concern that democracy, along with the free flow of ideas and open discussion that are necessary components of it, will be seriously jeopardized if these tools fall into the wrong hands. To quote Yoshua Bengio, “we are creating powerful tools, which could fall into the hands of those who can afford them, and who could use them to better control populations.” When we recall that the GAFAM companies have been accumulating data on us for years, and consider the ongoing debates over TikTok, there is indeed reason to worry.
But let’s stick to education.
It is obviously not possible to predict what long-term consequences AI will have in this area. But I put forward the idea that it is already worth reflecting on what it implies for certain categories, certain concepts that are crucial in education and that we do not think about often enough. The result may be surprising. Judge for yourself.
Knowing and understanding in the age of AI
One of the most fascinating things philosophy does is ask us to define commonly used concepts and get us to agree that we didn’t really know what they meant. The lesson goes back to Socrates.
Take the concept of knowledge. We use it all the time, but what does it really mean? Plato, very brilliantly, reminds us that three conditions must be satisfied before we can say that we know something. First, we must believe a certain idea to be true; second, the idea must in fact be true; finally, we must believe it for good reasons. Knowledge, according to the classic formulation, is therefore justified true belief.
I leave you to think that through, but it clearly has important repercussions for education. If a student knows something, they must believe it, it must be true, and they must have good reason to believe it. So a student cannot simply repeat something (taken from GPT-4, say) and, in doing so, know it. We will find out by asking them to explain why they hold it to be true. “GPT-4 told me so” is not a good answer.
But AI, I think, is asking us to go further. The philosopher Luciano Floridi has formulated ideas on this subject that seem to me sound and rich in practical implications for education.
For example, he suggests distinguishing between being informed and knowing. Sometimes what we learn from GPT-4 merely informs us. But knowing is something more than holding an isolated piece of information. Knowledge is linked to other ideas and other information, and it generates many questions, which either can be answered or cannot be; in the latter case, we must keep seeking.
It is these networks that create knowledge, that transform information into something we understand. It follows that the student who knows is not simply informed, and we will discover this by raising all the questions that knowledge suggests. Plato’s good reasons here take on a particular meaning, one forced upon us by the infinite mass of information that AI makes available.
Another distinction proposed by Floridi also seems useful to me for teachers who must adapt to the dawning era of AI. He proposes, in effect, to distinguish between ignorance and insipience.
Insipience is when we do not know the answer to a question, but our network of knowledge around it is rich: we know what the question means, that it has an answer, and what would make an answer plausible or not. For example, if, not knowing the answer, I ask GPT-4 “What is the most populated city in the world?”, I know that the answer will depend in particular on the definition of the word “city”, and that it will lead me to ask myself further questions.
Ignorance would be not knowing what cities are, or that they have different populations. This indicates that we can become less ignorant thanks to AI, but only if we are insipient rather than ignorant to begin with.
Does this suggest teaching practices related to AI? Ways to get students to use it? Ways to avoid missteps or dangers? Ways to check whether we are using it wisely? Ideas for training in digital critical thinking?
Tell me about it: I’m very interested in your ideas on this subject. You can write to me at [email protected]