(San Francisco) ChatGPT, the wildly popular generative artificial intelligence (AI) interface that brought the technology to prominence, went off the rails for several hours on Tuesday, answering users’ questions with nonsensical sentences, a reminder that these systems are still in their infancy.
OpenAI, the start-up that launched the program at the end of 2022, indicated Wednesday morning on its site that ChatGPT was working “normally” again.
On Tuesday afternoon (San Francisco time, where the company is based), OpenAI announced it was “investigating reports of unexpected responses from ChatGPT”. A few minutes later, the star Silicon Valley company assured users that it had “identified the problem” and “was in the process of resolving it”.
Many users posted screenshots showing erratic or incomprehensible responses from the generative AI model. This cutting-edge technology can produce all kinds of content (text, audio, video), usually of astonishing quality, from a simple request in everyday language.
On the forum for developers who use OpenAI tools, a user called “IYAnepo” noted the “strange” behavior of ChatGPT.
“It generates completely nonexistent words, omits words, and produces sequences of small keywords that are unintelligible to me, among other anomalies,” he said. “You would think that I had specified such instructions, but that is not the case. I feel like my GPT is haunted […]”
Another user, “scott.eskridge”, complained on the same forum that all his conversations with the language model had been “rapidly turning into nonsense for the last three hours”.
He copied an excerpt from one of the interface’s responses: “Money for the bit and the list is one of the strangers and the internet where the currency and the person of the cost is one of the friends and the currency. Next time you look at the system, the exchange and the fact, remember to give.”
OpenAI did not provide further details on the nature of the incident, itself a reminder that AI, even generative AI, has no awareness or understanding of what it “says”.
AI specialist Gary Marcus hopes the incident will be seen as a “wake-up call”.
“These systems have never been stable. No one has ever been able to develop safety guarantees around these systems,” he wrote in his newsletter on Tuesday. “The need for entirely different technologies that are less opaque, more interpretable, easier to maintain and debug – and therefore easier to implement – remains paramount,” he added.