AI and cognitive offloading | Le Devoir

According to the most recent update on the topic, generative artificial intelligence (AI) is making us dumber. No doubt the first users of writing, around 3300 BCE, were also the first to cheat in their oral presentations… by daring to take notes.

Generative AI has many flaws. The best known are its “hallucinations,” errors in the text or images it produces. It is also, we are told, the harbinger of what one researcher calls “cognitive offloading.” This sounds like an old refrain: AI will end up becoming a crutch, or worse, an intellectual prosthesis that will leave humans incapable of doing their own research and, therefore, of developing their good judgment.

This is what a researcher from the University of Monterrey, Mexico, wrote in a study published in mid-January: “The widespread use of conversational AIs can harm the effectiveness of higher cognitive functions, such as problem solving.”

It’s going to be okay

If we’ve learned anything over the past three years, it’s that “doing your research” on the Internet and “developing good judgment” are two completely different activities. It’s going to be fine, they said…

We heard a similar song when the smartphone became a mainstream accessory between 2006 and 2009: people were going to lose their memories! The same tune was played around the turn of the millennium, when the Internet and the Y2K bug threatened to bring down civilization entirely.

We were much better off when everything was sorted into cardboard folders and accordion binders.

The emergence of online video, and the fear that younger generations would no longer read as many books as their parents, provoked the same anxiety. How would knowledge pass from one generation to the next without books to contain it?

Yet books were being burned long before the birth of the Internet and of vertically filmed mobile video (another slander!).

All rotten

Undoubtedly like their creators, generative AIs are not perfect. Ask ChatGPT about a relatively recent news item and it will tell you what it knows “as of its most recent update.” OpenAI, the company behind ChatGPT, periodically adds newer data to its knowledge bank, but it is never breaking news.

In English, the telltale phrase is “As of my last knowledge update.” Out of curiosity, wondering whether generative AI was being used in scientific literature, journalists from the independent site 404 Media used the Google Scholar search engine to comb through publicly available reports submitted by researchers to their peers in specialized publications.

They found 115 research reports containing this phrase. Do the exercise yourself and you might find more (177 have been identified since 2020).

“As of my most recent update in September 2021, there is no generally accepted scientific correlation between quantum entanglement and longitudinal scalar waves,” reads the first result found, a paper published in a specialized journal under the title Quantum entanglement: examination of its nature and implications.

For researchers, seeing their studies published in a scientific journal is a form of validation. It is an effective way to establish their reputation as experts in a field of scientific study.

More than one slightly lazy academic seems to have concluded that producing studies quickly, with a generative AI doing the research work, is a way to accelerate this path to scientific fame. Indeed, all this scientific research is apparently not systematically read before being published. Many publishers will distribute anything that resembles a scientific study, as long as they are paid to do so.

The journal Nature reported in December that more than 10,000 fake studies had to be retracted in 2023 alone by scientific publishers after being flagged by groups whose sole task is to detect such cases of (at a minimum) intellectual fraud.

This is a record number. These same groups flagged 4,000 fake studies in 2022, Nature adds, and the average had been closer to 1,000 fake papers annually since 2013.

We don’t dare ask ChatGPT whether the link between the emergence of generative AI and this sudden increase in fake scientific research is causal, or more simply a correlation. In any case, chance does not seem to be part of the equation…

One thing is certain: if putting the artificial intelligence applications of OpenAI, Google and Meta online is causing what one university researcher, less lazy than the others, calls cognitive offloading, we cannot say that AI invented it.

Maybe it just amplifies certain human behaviors. Nor did AI invent fake research reports or doctored photos and videos. Someone still has to type the commands on a computer keyboard.

Is the fault, then, with that damn keyboard?
