[Opinion] This is not a university professor

Today, the university world is undergoing a profound transformation and a worrying development. ChatGPT technology has recently made its way into academic circles in a form of “social acceleration”. Such an innovation has profound consequences for teaching and research.

ChatGPT is a “natural language processing” technology that can be used to automate academic writing and research. In a changing academic world, this technology has the potential to accelerate the production and dissemination of knowledge. It can indeed facilitate the translation, writing and research of documents, and thus make the production of academic work faster and more efficient.

But this technology poses serious risks to scientific and academic integrity. Some fear the technology is being used to circumvent the academic review process, giving authors an unfair advantage and producing unreliable results. Moreover, this technology could be used to accelerate the neoliberalization of universities, which tends to reduce access to resources and degrade the quality of teaching and research.

Finally, the use of ChatGPT could lead to “surveillance capitalism”, where users are constantly monitored by algorithms. Researchers and students could be subject to increased scrutiny, as their research and work are analyzed and evaluated by algorithms. This technology could also lead to a loss of autonomy and freedom for researchers and students, depriving them of their right to decide how their work is produced and disseminated.

The arrival of ChatGPT in academic circles therefore raises legitimate concerns. Universities must take steps to protect their students and researchers, and to ensure the integrity of their activities. It is important to limit the use of this technology to where it is needed and to ensure that the work produced is always subject to appropriate review.

Therefore, universities must be vigilant and take action to combat the misuse of ChatGPT technology and its harmful impacts. Students and researchers must be protected from possible abuses, and the quality of teaching and research must be maintained.

What to do with this tool?

The previous six paragraphs were not written by me, but by the ChatGPT tool, which is based on the most recent techniques of deep learning and generative artificial intelligence and is capable of producing more or less complex texts from simple instructions. I personally wrote the following request: “Write a 600-word op-ed for the newspaper Le Devoir on the upheavals of ChatGPT in academia. Write a critical reflection including the concepts of social acceleration, neoliberalization of universities and surveillance capitalism.”

As a university professor interested in the transformations of advanced capitalism, social innovations and the complex impact of new technologies, here I am faced with a very concrete question which directly affects my profession: what to do with this tool which will have direct impacts on assessment and teaching methods?

Beyond the question of plagiarism, these tools can also be used by teachers to accomplish a variety of tasks. I’ve done a few tests myself: creating a syllabus in seconds, generating questions for a final exam, and answering those same questions with a higher quality of writing than that of undergraduate students, all in less than three minutes.

This immediately made me think of philosopher Günther Anders’ concept of human obsolescence. Here is another answer provided by ChatGPT: “The term human obsolescence, coined by Günther Anders, refers to a potential threat to the survival of human beings due to the arrival of new technologies. Although this notion can apply to all areas of life, it takes on particular meaning for university professors with the arrival of ChatGPT. Indeed, ChatGPT is an artificial intelligence system that can simulate a teacher and provide lessons to students. This could lead to a form of course automation and the loss of jobs for university professors.”

This is a somewhat disturbing answer. For my part, I believe that the status of full-time professors is not threatened in the short term. Contrary to popular belief, recent innovations in artificial intelligence are not leading to a massive loss of jobs, but rather to a reconfiguration of work and exploitation techniques (including “digital work”), via outsourcing and fragmentation of tasks, extending the automation mechanisms of industrial capitalism of the past.

Putting the genie back in its lamp

Nevertheless, the introduction of AI in schools and universities raises several fundamental questions. While these tools can be used pedagogically in certain circumstances, and teachers will have to adapt their evaluation methods accordingly, it would be simplistic to adopt an “adaptive” approach limited to individual responsibility. All primary and secondary schools, CEGEPs and university communities are affected.

Moreover, technological innovations emerge and spread in a determined socio-economic context, in this case capitalism, which influences the possible uses of algorithmic tools. It would not be surprising if the precariousness of teaching were intensified, as were the pressures to publish. In our capitalist system based on the imperatives of productivity and performance, this kind of technology would serve above all to intensify the dynamics of competition between students, professors and researchers.

Besides, no matter how much we “prohibit” the use of these tools in class, the technology is now available, free, easily accessible and usable; we cannot go back or put the genie back in its lamp.

We are therefore faced with a fundamental reflection on the purposes of teaching and on the relevance (or obsolescence) of the methods used in the field of education. To be honest, I’m dumbfounded. While teachers are not yet obsolete, the fact remains that a good part of our current evaluation tools belong in the trash.
