[Column] The hallucinations of an artificial intelligence

OpenAI has just released a new version of the natural language understanding and generation engine that powers ChatGPT. The interface has drawn plenty of attention since its launch last fall, and its new version is likely to go even further: it will make you hallucinate.

Literally.

Someday, let’s hope, OpenAI will hire a marketer who can find more evocative names for its software creations than GPT-4 and ChatGPT. In the meantime, let’s just remember that GPT-4 is, in a way, the fourth generation of the language engine OpenAI has developed since its founding in 2015. ChatGPT is the conversational interface built on that engine, which OpenAI also makes available to programmers and computer scientists who want to integrate the tool into their own software and applications.

There are many of them, probably closer to you than you think. Microsoft already uses it in its Bing search engine. Salesforce has just added it to Slack, its business chat software. The social network Snapchat has also adopted it to help its users produce posts without too much effort.

A novel in every answer

OpenAI customers who pay for access to ChatGPT get its most recent version, based on GPT-4. Those who use it for free get an earlier version, called GPT-3.5 Turbo. It provides less precise answers for a variety of reasons: its reasoning is sloppier, its sources of information are more limited, and its command of language is not as advanced as that of the more recent version.

Judging solely by its technical specifications, this new application of artificial intelligence (AI) is about ten times more capable than its predecessor. It can handle messages up to 25,000 words long, rather than the 3,000 words it could manage before.

For perspective, 3,000 words is roughly the length of an investigative feature that a newspaper like Le Devoir would publish across more than one page or over a few daily editions; 25,000 words is half a novel.

More muscle, but…

OpenAI’s new AI is in every respect more robust than before. It masters some fifteen languages with disarming ease, French included. Which makes you think: if software created in the United States can perform this well in French, Spanish and other languages, there is no excuse other than the purest laziness for companies like Air Canada or Canadian National failing to achieve a similar level of respect for languages other than English in their day-to-day communications.

It can compose musical scores as well as program entire websites. OpenAI also says its AI scores better on IQ tests and some college-level exams than the vast majority of flesh-and-blood students.

OpenAI adds that GPT-4 produces factual answers to the questions it is asked in about 60% of cases, an improvement over the previous version. Some 50 experts were hired to help reduce by 80% the risk of this AI being used for malicious or dishonest purposes.

That is real progress. On paper. In practice, it is unclear to what extent the 20% to 40% of cases where the answers may still be wrong or malicious represent a lower risk for users. It remains a huge margin of error, one that suggests whoever uses this AI will sooner or later obtain from it, deliberately or not, misleading or dangerous information.

What that person does with the information thus acquired will determine the seriousness of the situation. Recommending the wrong kind of chocolate for a Black Forest cake recipe won’t ruin a birthday party. Suggesting that people just recovering from hip surgery start running again as soon as possible is perhaps a little more reckless…

Collective hallucinations

At OpenAI, false information produced by the AI is called a “hallucination”. The AI somehow imagines a world in which what it says holds true even when it does not. The semantic shift is interesting: one step further and we would be claiming there must be parallel worlds where the factual errors produced by the GPT-4 language engine are, on the contrary, completely accurate.

As early critics of OpenAI’s technology pointed out, the extent of these hallucinations will depend on how companies and the public make use of tools like ChatGPT.

To Microsoft, Salesforce and Snap, we must add Google, Facebook, Amazon and other companies rushing to answer ChatGPT with language generation tools of their own. The reason is simple: hundreds of businesses are eager to use them to offset labour shortages.

If that happens, we will naturally have to redouble our vigilance about the veracity of the information produced by artificial intelligence. Otherwise, we risk suffering collective hallucinations…
