Artificial intelligence: When ChatGPT lost the plot for hours

ChatGPT malfunctioned for several hours on Tuesday, answering users’ questions with nonsensical sentences, a reminder that generative artificial intelligence (AI) models remain computer programs with no real understanding of human language.

OpenAI, the Californian start-up that launched the interface at the end of 2022, indicated Wednesday morning on its site that ChatGPT was once again working “normally”.

Later in the afternoon, it released a brief statement explaining the computer bug responsible for the problem, which occurred following an update.

“The money for the bit and the list is one of the strangers and the Internet where the currency and the person of the cost is one of the friends and the currency. Next time you look at the system, the exchange and the fact, don’t forget to give,” ChatGPT responded to a question from a user, “scott.eskridge”.

On the forum for developers who use OpenAI tools, he complained Tuesday that all his conversations with the language model were “rapidly turning into nonsense for the last three hours.”

With the success of ChatGPT, OpenAI has sparked massive enthusiasm for generative AI, which makes it possible to produce all kinds of content (texts, sounds, videos) — usually of astonishing quality — on a simple request in everyday language.

On Tuesday afternoon, San Francisco time, where it is based, the company announced that it was “investigating reports of unexpected responses from ChatGPT.”

A few minutes later, the Silicon Valley star company assured that it had “identified the problem” and “was in the process of resolving it”.

“Haunted” GPT

Many users posted screenshots showing erratic or incomprehensible responses from the model.

“It generates completely non-existent words, omits words and produces sequences of small keywords that are unintelligible to me, among other anomalies,” reported a user called “IYAnepo” on the same forum.

“You would think that I had specified such instructions, but that is not the case. I feel like my GPT is haunted […].”

OpenAI explained Wednesday that a “user experience optimization” had “introduced a bug in the way the model processes language.”

“Language models generate responses by randomly sampling words, partly based on probabilities,” the company emphasized, before giving more technical details and concluding that the incident was “resolved” after “installing a patch”.
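That sampling step can be illustrated with a toy sketch. This is not OpenAI’s code, and the vocabulary and probabilities below are invented for illustration: a model assigns a probability to each candidate next word, then draws one at random, and a “temperature” setting controls how often unlikely (and potentially incoherent) words are picked.

```python
import random

# Hypothetical probabilities a model might assign to the next word.
vocab = ["the", "money", "currency", "system", "friend"]
probs = [0.40, 0.25, 0.15, 0.12, 0.08]

def sample_next_token(vocab, probs, temperature=1.0):
    # Temperature reshapes the distribution: values above 1 flatten it,
    # making rare words more likely; values below 1 sharpen it.
    weights = [p ** (1.0 / temperature) for p in probs]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Draw one word at random according to the reshaped probabilities.
    return random.choices(vocab, weights=weights, k=1)[0]

print(sample_next_token(vocab, probs, temperature=0.7))
```

A bug anywhere in this pipeline, such as sampling from the wrong distribution, is enough to turn fluent output into word salad, which is consistent with OpenAI’s description of the incident.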

The incident is a reminder that AI, even the generative kind, has no awareness or understanding of what it “says”, contrary to the impression it can give during “conversations” with users.

When they debuted a year ago, interfaces like ChatGPT and its rivals from Google and Microsoft regularly tended to “hallucinate”, that is to say, to invent facts or even simulate emotions.

AI expert Gary Marcus hopes Tuesday’s incident will be seen as a “wake-up call.”

“These systems have never been stable. No one has ever been able to build security guarantees around these systems,” he wrote in his newsletter Tuesday.

“The need for completely different technologies that are less opaque, more interpretable, easier to maintain and debug — and therefore easier to implement — remains paramount,” he added.
