[Opinion] Between education research and a house of cards

A few days ago, we read in the pages of Le Devoir a column that claimed to report on the state of knowledge on various topics in education, in particular mindfulness meditation, universal design for learning and artificial intelligence. The argument put forward for taking a critical look at these approaches rests on the “solidity” that the author seems to grant to certain methodologies, which, in his view, make it possible to “measure with the facts”. In our eyes, this narrow view of what it means to show or demonstrate in the education sciences leads to an argument whose structure is as fragile as that of a house of cards…

So-called “conclusive” data, generated by a very specific form of research, are highly prized by some, and sometimes even regarded as the only guarantee of “effectiveness” or relevance. Such data come from medicine, a field whose workings are very different from those of education. They may be appropriate when strict treatment protocols can be applied and when effects can be measured accurately with uniform instruments.

In education, however, such “conclusive” data have limitations, owing in particular to the variability of interventions and teaching practices, and to the significant influence of context, both at school and at home. For this reason, they should not serve as the sole indicator of an approach’s relevance.

Obtaining evidence depends on the ability to isolate variables: verifying, for example, that a given result stems from no factor other than the intervention being evaluated. That hope is difficult to fulfill in education, since a result or performance can be influenced by students’ backgrounds, the family support they receive, interactions with classmates, the circumstances of the moment, the foundations of a school subject, the educational values promoted, and so on. To limit the weight of these variations, researchers try to obtain large samples.

Multiplying the approaches

Then comes another issue: one must actually measure the concept one claims to measure, and have defined it adequately beforehand. When reviewing scientific articles, one sometimes comes across studies that claim to examine processes (resilience, for example) while using measures that instead reflect personal traits. Admittedly, such studies offer what proponents of conclusive data consider guarantees of rigour: large samples, statistically significant results, impressive effect sizes. But if the concept evaluated is not the one the study claimed to evaluate, a fundamental problem arises: construct validity.

In addition, while this type of study can link variables and document effects, it often says little about the reasons behind those effects, how to implement the interventions, or the contextual conditions that can favour them. This is one of the reasons why qualitative research approaches deserve to be considered and valued more, in keeping with their own criteria of scientific rigour.

Meta-analyses provide one type of evidence. Although they offer a synthetic overview of a more or less varied set of studies, they generally exclude qualitative studies. They also end up comparing studies that do not necessarily define the concepts in the same way, nor measure them with the same instruments. As for the specific interventions or practices examined, these are generally poorly described: we do not know the exact procedures by which they were carried out, nor the contexts in which they took place.

Although meta-analyses are useful and save time for readers who cannot go through every study they include, they should be used to obtain a partial portrait of the research on a given subject, with their limits kept in mind. They should not be used on their own to discredit or praise practices that are often poorly defined.

Appreciating the nuances

Furthermore, it is important to question the very concept of effectiveness. Effective for what purpose or purposes? What do we want to develop in students? Do we want to make them exam performers? Committed future citizens capable of critical thinking? Something else entirely? These questions are fundamental, because it is illusory to think that a single approach can achieve all of these ends at once.

When it comes to learning, does an exam mark alone capture everything students have learned? Can everything be quantified? How should we measure the effectiveness of an approach for developing students’ social and emotional skills? Or their creativity? These are some of the questions that receive too little thought before effectiveness in education is judged.

Let’s conclude in three words: nuance, nuance, nuance! In education, it is time to stop ranking research approaches and handing out injunctions, especially on the basis of fragmentary scientific writings. Rather than claiming to report on THE research, as if it were a monolithic block around which there was absolute consensus, let us remain transparent, critical and humble in our interpretations. The fact that studies recommend further research on a given topic, especially in the case of innovative approaches, does not mean that those approaches are irrelevant or useless for schools. That would be a misleading interpretation.
