(Seattle) Sam Altman, the head of OpenAI, the company behind ChatGPT, on Tuesday defended the generative artificial intelligence (AI) his company builds as “safe enough” and urged the sector to press ahead quickly with the technology, despite the controversies currently swirling around his start-up.
GPT-4, OpenAI’s AI model, “is far from perfect. We still have work to do, but it’s generally considered strong enough and secure enough for a lot of uses,” he said on stage in Seattle at “Build,” the annual conference of Microsoft, OpenAI’s main investor.
After the splashy announcements from OpenAI and Google last week, and well ahead of the highly anticipated ones from Apple in June, it was Microsoft’s turn this week to show off its innovations in a technology regarded as a new industrial revolution.
But the accumulation of controversies has somewhat stolen the spotlight from the IT giant, the self-proclaimed “industry leader in AI”.
Sam Altman has just apologized to actress Scarlett Johansson, who accuses him of copying her voice for ChatGPT’s brand-new voice mode.
OpenAI has also suspended Sky, the AI-generated voice in question, which was used during a demonstration of the capabilities of its new multimodal model, GPT-4o.
In a statement Monday, the actress said she had refused the executive’s request to use her voice and was “shocked” by the strong resemblance.
“I was very surprised by how much I like the new voice mode,” said Sam Altman, all smiles, on stage Tuesday.
“Flashy products”
He encouraged the thousands of developers attending the conference to “take advantage” of this “special period”, “the most exciting since the advent of mobile, or even the internet”.
“Now is not the time to put off your ideas or wait for the next developments,” he insisted.
He also praised the performance of his new model: “Speed and cost are very important. With GPT-4o, we managed to halve the price while doubling the speed.”
And in the future, “models will become even more intelligent,” he promised, assuring that “that’s what matters most.”
Many observers and elected officials are worried about the breakneck pace at which tech giants are rolling out ever more sophisticated and human-like AI tools.
The companies, for their part, are celebrating their exponential progress and insist that “this is only the beginning”.
There are dissenting voices, however, such as Jan Leike, the former head of the OpenAI team responsible for overseeing the possible long-term dangers of a “super AI” with cognitive capacities similar to those of humans.
He resigned last week, saying on X that the company was prioritizing “flashy new products” over safety.
His co-lead, Ilya Sutskever, a co-founder of the company, has also left. OpenAI disbanded the team and reassigned its members to other groups.
“Like medicine”
On Tuesday, Sam Altman insisted that he takes the safety issues of AI models seriously, even as they are increasingly used at scale despite the opacity of how they work.
“It’s like when you take a medicine: you want to be sure it’s safe,” he noted.
“I would like to thank the OpenAI team for their partnership and responsible approach to innovation, which allows our industry to progress,” said Satya Nadella, the boss of Microsoft, during his opening speech.
On Monday, the Windows maker unveiled “Copilot” PCs, new computers with its Copilot AI interface and other tools built in by default to help users with everything from emails to work meetings and personal projects.
One of them, “Recall”, lets users go back in time on their PC, see all the windows they have opened, or ask Copilot to find a document for them.
The tool alarmed AI specialist Gary Marcus, who sees it as an “Orwellian feature” and, above all, a “super target” for cyberattacks.