Artificial intelligence | How ChatGPT caused an ‘arms race’

One day last November, OpenAI employees received an unexpected assignment: release a chatbot, quickly.


This chatbot, an executive announced, would be called “Chat with GPT-3.5” and would be made available to the public free of charge. In two weeks.

The announcement puzzled some OpenAI employees. All year, the San Francisco artificial intelligence (AI) company had been working toward the release of GPT-4, a new AI model that was strikingly good at writing term papers, solving complex coding problems and more. After months of testing and fine-tuning, GPT-4 was nearly ready. According to three people familiar with the inner workings of OpenAI, the model was set to launch in early 2023, along with a few chatbots that would let users try it out for themselves.

But senior OpenAI executives had changed their minds. According to sources with contacts at OpenAI, some executives worried that rivals might upstage them by releasing their own chatbots before GPT-4. They felt that by quickly putting out a version built on an older model, they could gather feedback to improve the new one.

So they decided to dust off and update a never-before-seen chatbot that used an improved version of GPT-3, the company’s previous language model, released in 2020.

Thirteen days later, ChatGPT was born.

In the months since its debut, ChatGPT has become a worldwide phenomenon. Millions of people have used it to write poems, create apps, and conduct impromptu therapy sessions.

It has been adopted (with mixed results) by news publishers, marketing companies and business leaders. And it’s sparked a frenzy of investors trying to cash in on the next wave of the AI boom.

It has also sparked controversy. Users have complained that ChatGPT is prone to giving biased or incorrect answers. Some AI researchers have accused OpenAI of recklessness. And schools across the country, including those in New York, have banned ChatGPT in an attempt to stem a deluge of AI-generated homework.

Yet little has been said about the origins of ChatGPT, or the strategy behind it. Within the company, ChatGPT came as a surprise: an overnight sensation whose success has created both opportunities and headaches, according to several current and former OpenAI employees, who requested anonymity because they were not authorized to speak publicly.

An OpenAI spokesperson, Niko Felix, declined to comment for this article.

The effects of such growth

Before the launch of ChatGPT, some OpenAI employees were skeptical that the project would succeed. A chatbot Meta had launched a few months earlier, BlenderBot, had flopped, and another Meta project, Galactica, was pulled after just three days. Some employees, desensitized by daily exposure to cutting-edge AI systems, thought a chatbot built on a two-year-old AI model might seem boring.

But two months after its debut, ChatGPT has more than 30 million users and gets roughly 5 million visits a day, according to two people familiar with those figures. That makes it one of the fastest-growing software products in memory. Instagram, by comparison, took nearly a year to reach its first 10 million users.

This growth has brought some challenges. ChatGPT experienced frequent crashes due to a lack of computing power, and users found ways to circumvent some of the bot’s security features. The hype around ChatGPT has also annoyed some tech rivals, who have pointed out that the underlying technology isn’t, strictly speaking, all that new.


PHOTO: OpenAI CEO Sam Altman (Brendan McDermid, Reuters Archives)

ChatGPT is also, for now, a money pit. There is no advertising, and the average conversation costs the company “less than 10 US cents” in processing power, according to a tweet from OpenAI CEO Sam Altman, which likely adds up to millions of dollars a week. To offset those costs, the company announced this week that it will start selling a US$20 monthly subscription, called ChatGPT Plus.

Despite its limitations, ChatGPT’s success has propelled OpenAI into the ranks of Silicon Valley’s major players. The company recently struck a $10 billion deal with Microsoft, which plans to integrate the startup’s technology into its Bing search engine and other products. Google declared a “code red” in response to ChatGPT and accelerated the rollout of several of its own AI products in an attempt to catch up.

Praise and criticism

OpenAI is an atypical company by Silicon Valley standards. Launched in 2015 as a nonprofit research lab by a group of tech leaders including Altman, Peter Thiel, Reid Hoffman and Elon Musk, it created a for-profit subsidiary in 2019 and struck a $1 billion deal with Microsoft. It has since grown to about 375 employees, according to Altman, not counting the contractors it pays to train and test its AI models in regions such as Eastern Europe and Latin America.

From the outset, OpenAI presented itself as a mission-driven organization that wants to ensure that advanced AI will be safe and in line with human values. But in recent years, the company has embraced a more competitive spirit, which some critics say has come at the expense of its original goals.

These concerns gained momentum last summer, when OpenAI released DALL-E 2, its image-generation tool, which turns written prompts into digital works of art. The app was a hit with consumers, but it raised tough questions about how such powerful tools could be used for harmful purposes. If creating hyper-realistic images was as easy as typing a few words, critics asked, what would stop pornographers and propagandists from having a field day with the technology?

To allay these fears, OpenAI equipped DALL-E 2 with numerous safeguards and blocked certain words and phrases, such as those related to violence or nudity. It also taught the tool to neutralize certain biases in its training data, for example by ensuring that when a user asked for a photo of a CEO, the results included a more diverse set of executives.

These interventions avoided problems, but some OpenAI executives saw them as cumbersome and overbearing, according to three insiders. One of them was Altman, who has said he believes chatbots should be personalized to the tastes of the people using them: one user might opt for a stricter, more family-friendly model, while another might choose a looser, bolder version.

OpenAI has taken a less restrictive approach with ChatGPT, giving the bot more freedom to weigh in on sensitive topics such as politics, gender and religion. Even so, some conservatives have accused the company of going too far. “ChatGPT Goes Woke,” read the headline of a National Review article last month, which claimed that ChatGPT gave left-leaning answers to questions about topics such as drag queens and the 2020 US presidential election. (Democrats have also complained about ChatGPT, most often because they think AI should be more tightly regulated.)

As regulators grow restless, Altman is trying to keep ChatGPT above the fray. He traveled to Washington last week to meet with lawmakers, explain the tool’s strengths and weaknesses, and dispel misconceptions about how it works.

Back in Silicon Valley, he is contending with a new frenzy of interest. Beyond the $10 billion Microsoft deal, Altman has met with executives from Apple and Google in recent weeks, according to two people with knowledge of the meetings. OpenAI also struck a deal with BuzzFeed to use its technology to create AI-generated lists and quizzes (the announcement more than doubled BuzzFeed’s stock price).

The race is intensifying. According to Reuters, Chinese tech giant Baidu is preparing to launch a chatbot similar to ChatGPT in March. Anthropic, an AI company founded by former OpenAI employees, is reportedly in talks to raise US$300 million in new funding. And Google is developing more than a dozen AI tools of its own.

And then there’s GPT-4, which is still slated for release this year. Its capabilities could far exceed ChatGPT’s. But perhaps, now that we have gotten used to the presence of a powerful AI tool, the next one won’t seem so overwhelming.

This article was originally published in The New York Times.

