[Opinion] The assessment of learning outcomes, a Pandora’s box?

The disclosure of the results of the September 26 exam of the Order of Nurses of Quebec (OIIQ) has provoked a great deal of reaction. There would be much to say or write about this situation: the choice of an exam made up of multiple-choice questions (MCQs) tied to written clinical situations; the choice not to allow requests for a grade review, on the grounds that "the answers appear in a grid, which is corrected electronically" and that, consequently, "no interpretation of the answers is carried out"; the choice to set the pass mark for this professional examination at 55%; etc.

What is happening with the OIIQ's professional examination is not unique. Similar questions could be raised about any of the 46 professional orders in Quebec that grant the right to practice through an examination. Other orders rely on higher education institutions to validate the learning outcomes of future professionals. This is the case of the Order of Psychologists of Quebec (OPQ). Does this mean that the master's and doctoral programs that "give access to the permit issued" by these professional orders are free of problems related to the evaluation of student learning? Far from it.

On November 1, 2022, Louise Leduc of La Presse published an article titled "Imbalance in Psychology." In essence, she depicted the gap between the number of students enrolled in the bachelor's degree in psychology and the much smaller number enrolled in the doctorate (the diploma the OPQ requires for a license to practice). In reaction, Hélène David, a former psychology professor at the Université de Montréal, was quoted: "You need insanely high averages [to be admitted to the doctorate]. If you don't have A's everywhere, you can cry your life away." How does one get A's everywhere? Some may say that all you have to do is commit fully, study very hard and perform! [...] It is a defensible hypothesis, but it reckons without the way learning outcomes are evaluated.

In one Quebec psychology program, the person in charge of the "Psychology of Learning" course uses two assessments: a midterm exam (50%) and a final exam (50%) at the end of the semester. After grading, some students learned that they had earned very good marks and were delighted, but a surprise awaited them. Shortly afterwards, the instructor contacted them by email: "A penalty (12 points) was applied to the final exam, because the average was too high. Have a good semester." Twelve points less! For some, that is the equivalent of going from an A+ to a B+; for others, from a B to a C. The consequences are significant: without a sky-high average, the probability of being admitted to the doctoral program drops accordingly, as does the morale of the students affected by this kind of measure.

Some of the challenges of assessing learning outcomes

What is evaluation for? For many students, the goal is to get a grade; for some teachers, the goal is to give a grade. But assessment is not the grade. Despite everything, this grade is one of the keys to admission to a program, to obtaining a scholarship, to access to an internship, and so on. Above all, however, evaluation should inform students about their learning (its quality and quantity), from a formative perspective. Otherwise, the end is confused with the means. The end should not be to "pass the exam," but to demonstrate that learning has taken place. Evaluation is the means of demonstrating it.

In an article titled "Stop Telling Students to Study for Exams" (2018), David Jaffe argues that the common practice of inviting students to study for the exam is counterproductive. Why? Because it works against learning. According to him and many others, students should be encouraged to study in order to learn, to understand. Otherwise, studying, a pivotal moment in the learning process, is instrumentalized. [...]

In the human sciences, particularly in psychology, it has been shown that certain stable human traits are distributed "normally" in the population. This postulates a random, homogeneous distribution of individuals across the population, which is not the case for a group of university students. For such a group, applying the normal curve introduces a significant bias: the average intelligence of this subgroup, like its distribution, is not "normal." Moreover, performance on an exam is not a (stable) human trait. Students' performance depends on several factors already discussed, but also on the quality of the exam and of its grading.

Evaluate to promote learning and personal development

Many publications suggest implementing assessment practices that support learning and promote the development of students' skills. From these proposals emerged the concepts of formative assessment (as distinguished from summative, or certifying, assessment) and authentic assessment. The first aims above all to promote student learning by developing students' skills and allowing them to gauge their progress. The second proposes to evaluate students in a context or situation close to professional or social reality. Internships, for example, usually offer many authentic professional contexts and situations. The assessment then focuses on the students' ability to "act competently" when faced with a given task.

How far can professional orders and university programs go in these two directions? Quite far, judging by the innovations implemented in certain programs and professional orders. In these cases, mentoring and feedback offered to students play a central role. These educational tools support not only learning but also student motivation. In doing so, the learner becomes the focal point of both the learning path and the assessment of learning. Students' personal and professional, even identity-related, development is added to the program's learning objectives. The student's quest for meaning is thereby nurtured and, in turn, fuels their engagement with the course and the activities offered.

Of course, professional orders, like university programs, are accountable to other stakeholders for the quality of their assessments. It is worth remembering that students themselves are one of these stakeholders, and perhaps the one with the most to gain or lose. These people are in search of legitimacy, both professional and academic. As Bibeau and Meilleur (2022) write: "The exercise of quantifying a reality not only describes it; it also contributes to defining it."
