[Opinion] How can we coexist with conversational agents?

In May 2021, tech giant Google announced the debut of LaMDA, a revolutionary chatbot based on artificial intelligence (AI) and capable of sustaining conversation on a multitude of topics. It proved so convincing that Blake Lemoine, an engineer involved in the project, went as far as to describe it as a conscious being. Barely a year and a half later, it was ChatGPT's turn: another conversational agent that surprised the whole world with its knowledge, its logical reasoning, and even its ability to code.

It is now clear that this type of AI will be an integral part of our daily lives. Companies such as Microsoft and Google are already racing to integrate it into their respective search engines, a competition that will reshape the digital world of tomorrow. Although these systems provide real benefits, they also pose threats to which society must adapt in the years to come.

Legitimacy of a text

From its first appearance, ChatGPT has shown the qualities of a writer: it can produce not only essays but also sonnets expressing a precise idea, all in less than a minute. It is easy to see how this tool has become the favourite companion of students and bloggers who use it to carry out their work.

But how can we distinguish a legitimate work from one generated by ChatGPT? Detection tools lack precision and are easily misled by small modifications to the text. The problem is especially urgent for teachers of French, English, philosophy, history and other essay-based subjects, who struggle to verify that what they read really belongs to the student who submitted it. This kind of intellectual fraud undermines fairness in education, because the chatbot offers a shortcut to a high grade. And if left unchecked, overreliance on AI can erode students' ability to write on their own.

For now, there is no better remedy than improving detection systems, which may well be the only real defence against this kind of fraud. In the meantime, schools may consider giving more weight to essays written in person.

Job Loss

ChatGPT's extraordinary capabilities bode ill for the job market. The application demonstrates that AI could one day replace humans in many of today's occupations: secretaries, teachers, programmers, translators, journalists, and more. The real question is whether this transition will be rapid or gradual.

Today's situation strangely resembles the eve of the industrial revolution, when the first steam engines appeared and many workers lost their jobs. Similarly, chatbots like ChatGPT offer higher productivity at a lower cost. Should we expect the same outcome, that is, widespread job losses? If so, there is a strong risk of widening social disparities over the next few years, as owners grow richer while entry-level jobs disappear.

Just as it did during the industrial revolution, society will eventually recover from these changes by creating new economic sectors. The challenge is to maintain stability during the transition. That task falls mainly to governments: it may be necessary to consider raising taxes on technology companies to better fund social safety nets.

Influence on public opinion

With the arrival of advanced conversational agents, the population will increasingly depend on these tools for information. In the near future, we can imagine everyone carrying an assistant like ChatGPT in their pocket: when looking for a quick answer, you simply ask it what you want to know.

But is the answer we receive truly objective? Unlike a search engine like Google, where a query returns several web pages whose reliability one can evaluate, ChatGPT gives a single answer without even citing a source. If chatbots operate this way, tech companies can easily manipulate popular opinion on a topic by training their algorithms to give biased answers or spread rumours. The rise of these tools can therefore only strengthen the influence of these companies.

This kind of threat has already proven its disruptive power, as the role of Facebook in the 2016 American elections reminds us. To prevent such situations, each individual must diversify their sources of information. As for governments, they must work to regulate the quality of information provided by chatbots, with the aim of reducing the spread of misinformation.

Decrease in long-term logical capacity

Modern conversational agents are the first algorithms capable of reasoning logically on a wide range of subjects. ChatGPT is often used to help humans think: it acts as an external brain.

Again, this is reminiscent of the years following the appearance of the Internet, which became an external memory. Since then, humans have visibly put less effort into retaining information or knowledge learned at school: a simple Google search reveals everything.

Could the same thing happen again today? Through constant use of AI, humans may spend less and less time thinking for themselves. If so, it will become even easier for tech companies to shape public opinion. The best defence against this threat is to educate people not to rely too heavily on the machine and to use it for consultation only.

In short, as AI takes on greater prominence, society must keep pace with the change and adopt certain measures to stay healthy. Some require individual effort, others collective effort. If we manage to use chatbots properly, we can enjoy their benefits for years to come.
