A while ago, I put out a call to hear your thoughts on artificial intelligence (AI) and ChatGPT in the classroom and in research: what does it change, what should we do with it, how do you use it (or choose not to), and why?
So many of you responded that, after a first column on the subject, I am allowing myself to return to it. Once again, I learned a great deal from reading you, and I thank you for it.
Watch out for mistakes!
We know it, we have been told it, but we do not always realize it and we are not wary enough: ChatGPT makes mistakes. Surely not in elementary mathematics, though? It seems so.
Antoine Van Schaftingen sent me a wrong answer ChatGPT gave to a probability calculation (what is (5/6)^7?). ChatGPT asserted that it is 0.335, showing the operations needed to obtain this result. When Mr. Van Schaftingen checked, those operations actually give about 0.2791.
Informed of this error, ChatGPT apologizes, redoes its calculation… and gives 0.2785! Told of its new error, it apologizes again and acknowledges the correct answer. Mr. Van Schaftingen's conclusion: "I won't hide from you that I would not have suspected that what I took for a computer could be wrong on such a simple calculation…"
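The calculation itself takes one line in any programming language, which is what makes the anecdote striking. A quick check of the value from the exchange above:

```python
# Verify the calculation from the anecdote: (5/6)^7.
# ChatGPT claimed 0.335, then 0.2785; the correct value is about 0.2791.
result = (5 / 6) ** 7
print(round(result, 4))  # 0.2791
```

(For the curious: (5/6)^7 is, for instance, the probability of rolling a fair die seven times without ever getting a given face.)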
Something else. You have surely heard that ChatGPT tends to invent bibliographic references. This is serious and dangerous, especially for teaching and research…
Jean-François Dragon ran a test on the ideas an author puts forward in one of his books on a particular subject (multiple intelligences). He asked me to validate what ChatGPT said, precisely because I am the author in question. The result?
ChatGPT is fairly good at summarizing my ideas on the subject. But, and these are not small nuances for researchers or students, it claims that some of these ideas appear in a certain book… which is false. Worse still, it attributes books to me that I did not write (I would know!) and, as a bonus, it cites an article from an academic journal that, despite my best efforts, I cannot find, and which therefore seems… not to exist.
Assignments that can no longer be given?
People are rightly worried about the consequences ChatGPT will have for assessment. Should we, for example, stop assigning certain work to be done at home, since the pupil or student could simply hand it over to ChatGPT?
A concrete case was reported to me by a Secondary 4 history teacher.
He asked his students to find, at home or around them, objects from another era. The students had to describe them, then select one and write an essay answering this question: "How does your object reflect its time?" Our little kitten (I am a bit tired of writing its name…) produces an answer in 10 seconds. True, mistakes can slip in… just as in a student's paper.
Jérôme Soucy teaches probability at university. A nice idea: he gave his students questions from a previous year's exam, along with his own answers and those provided by ChatGPT.
His verdict? The cute feline is good at applying a wide variety of algorithms, but "has difficulty extracting the real question from the text. With the exception of one sub-question of an encyclopedic nature (defining two independent events in probability), it consistently fails." Worse, the dunce does not refrain from answering a question it does not understand: it answers anyway, and what it says is sometimes very far from the truth. Not too serious for an exam, no doubt. But imagine, says Mr. Soucy, that the health and safety of human beings were at stake and that we had not had the wisdom needed to frame some of the decisions such systems can prompt us to take…
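For reference, the one "encyclopedic" sub-question ChatGPT got right has a standard textbook answer: two events A and B are independent when P(A ∩ B) = P(A) × P(B). A minimal check with a fair die (my own illustration, not a question from Mr. Soucy's exam):

```python
from fractions import Fraction

# One roll of a fair six-sided die. Two events (illustrative example):
#   A = "the roll is even"      = {2, 4, 6}
#   B = "the roll is at most 4" = {1, 2, 3, 4}
A = {2, 4, 6}
B = {1, 2, 3, 4}

def prob(event):
    """Probability of an event under the uniform law on {1, ..., 6}."""
    return Fraction(len(event), 6)

# Independence test: P(A and B) == P(A) * P(B)
print(prob(A & B) == prob(A) * prob(B))  # True: 1/3 == 1/2 * 2/3
```

The definition is the point, not the code: independence is a property of the probabilities, not of the events "having nothing to do with each other", which is exactly the kind of nuance an exam probes.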
Let’s end on a more positive note.
David Di Pietro, an instructional designer, sent me an academic paper that I read with interest.
Its authors argue that ChatGPT can help address three challenges of classroom learning: improving transfer; breaking the illusion of explanatory depth; and teaching students to critically evaluate explanations. How? They propose specific techniques for doing this and for integrating it all into teaching.
See if that convinces you.