[Column] Robots make humans smarter

It seems to be taken for granted that the evolution of artificial intelligence (AI) runs counter to that of human intelligence: the more sophisticated robots become, the fear goes, the lazier, even dumber, humans will become. Yet this is simply false.

A group of researchers affiliated with the University of California, Santa Barbara published the results of their research in the form of a report whose title says it all: Superhuman artificial intelligence can improve human decision-making by driving innovation.

The researchers examined how Go players' behaviour changed between 1950 and 2021, paying particular attention to how their play evolved after the appearance of an AI capable of competing with the greatest players of this popular game of Chinese origin. They cite as an example AlphaGo, an AI developed in the laboratories of a Google division which, at the end of the last decade, established itself as one of the best Go players on the planet.

They analyzed no fewer than 58 billion counter-intuitive moves made by the AI in its own games against human opponents, then compared the reactions of its flesh-and-blood adversaries with the moves made by other victorious players over the seven preceding decades.

Their conclusion is unequivocal: "We examined the strategy of human players over time and found that innovative decisions (i.e., moves that had never been observed before) occurred more frequently and were associated with better decision-making after the emergence of an AI with superhuman abilities."

Their study goes on to explain in greater detail that players who face highly creative opposition become more creative in turn: they abandon the more traditional patterns of play they likely learned from the players who came before them, which leads them to explore new ways of playing. This, in effect, forced them to make better decisions, a challenge they apparently rose to with such regularity that they seemed to become better players.

As the saying goes in some parts of the countryside, you don't sharpen a knife with butter…

Board games

The researchers from the University of California, Santa Barbara published their study last December. By coincidence, it appeared just days after the launch of ChatGPT, probably the most concrete embodiment of the transformative potential of new digital technologies.

The study obviously did not make as much noise as ChatGPT, which quickly drew reactions from experts in every sphere of the information society most of us live in: education, work, leisure…

Initially, this AI was mostly frightening. It is able to understand and produce text in language as natural as the one we use daily in our exchanges with other humans. It can mimic the style in which we communicate, whether formal, as required for academic research, or more familiar, as in email or on social media.

It can also be tricked into going beyond its own limits. In the case of ChatGPT, OpenAI, the company that created it, wanted to prevent abuse, deliberate or not. Naturally, some took that as an invitation to find ways of misusing the tool in spite of itself. It took only a few hours to discover how it could be used to create, in seconds, malicious software and messages, among many other harmful applications.

Not to mention the ease with which ChatGPT can produce schoolwork of decent quality, more than good enough for a lazy student.

Obviously, the dark side of these technologies requires the creation of safeguards to avoid falling into the trap.

From teacher to philosopher

Like Go players, professionals who fear the impact of these next-generation AIs on their daily work will have to raise their game a notch and find new, even unheard-of, ways to accomplish their tasks.

For example, in computer science, tools like ChatGPT (though not only that one) make it possible for programmers to automate the more manual part of their job: writing the code that brings applications and software to life. Is this type of employment threatened? Not if programmers evolve from coder to software architect: rather than producing the bricks and mortar, they can devote more time to imagining the final form of the building.

Several teachers are also looking for tools that can detect whether the work their students submit was written for them by an AI. Perhaps the solution is to approach teaching differently, for example by answering, once and for all, the question nearly every student asks at least once in their young life: what is the point of learning "that"?

The answer is quite simple: it makes us better.
