(Mountain View) Google on Tuesday unveiled its new Pixel smartphones powered by generative artificial intelligence (AI), pursuing its ambition to create ever more personal AI assistants, the current holy grail of Silicon Valley.
In Mountain View, at the Californian group’s headquarters not far from San Francisco, company executives touted the merits of the Pixel 9 and its connected accessories.
But the star of the show was undoubtedly generative AI. Gemini, Google’s main AI model, is now embedded at the heart of the devices, letting users communicate with them directly.
“With our demos, you can see how Gemini will help you, on Android, by combining real-world insights with your information and the apps you use every day,” said Jenny Blackburn, a vice president at the company.
The Californian group has a major advantage in the race for AI agents: user data, which it collects through its widely used services, such as its search engine, its Android mobile operating system and applications like YouTube, Google Maps and Gmail.
Armed with this information, an AI assistant has the potential to become an omniscient secretary, accessible at all hours and capable of performing numerous tasks on behalf of the user.
“With your permission”
Despite its online empire, Google has failed to break into smartphones, a market dominated by its neighbor Apple, South Korea’s Samsung and China’s Xiaomi.
But Apple only got into generative AI very recently, unveiling “Apple Intelligence” in June, a year and a half after OpenAI began its wild ride with ChatGPT.
Google, at the forefront alongside Microsoft, hopes to give its smartphones new momentum and make Android more attractive.
The Pixel 9 models feature a chip custom-designed to run Gemini, which can understand and generate data in the form of text, audio, images and more.
Gemini “can provide you with unparalleled personalized support, accessing, with your permission, relevant information in your email, calendar and more,” Shenaz Zack, a director of the Pixel line, said at a press conference.
The AI assistant can book appointments at the hairdresser or the garage, write messages, find screenshots and pull information from them, or read an event invitation and check the user’s availability, among other things.
The model “handles these kinds of complex queries without sending your personal data to a third-party AI vendor that you may not know or trust,” the director assured, a dig at Apple, which has teamed up with OpenAI for similar functionality.
“Some new tools seem incredibly useful but could be seen as invasive,” said Grace Harmon of Emarketer. “Users will decide whether they find them beneficial or too scary.”
Microsoft came under fire in May, after unveiling its “AI PCs,” over a feature that lets the computer remember everything the user has seen on screen.
Record conversations
“For an AI assistant to be truly useful, it needs to be able to converse,” said Jenny Blackburn, who introduced “Gemini Live,” a voice chat application with the assistant, available in English on all Android smartphones.
It is meant to let users brainstorm ideas or rehearse before a job interview, for example.
This is the ultimate goal of tech companies: ultra-personalized AI agents capable of reasoning like humans.
Meta (Facebook, Instagram) has also made significant strides with these new digital “friends.”
Its connected Ray-Ban glasses could give a new dimension to Meta AI, an assistant that can “see” and “hear” the surroundings of the user wearing them.
Google’s new handsets will be able to record and summarize phone conversations directly on the device.
Beyond generative AI, artificial intelligence more broadly continues to make the new devices ever more capable, improving their cameras, their photo-editing tools and even… the phone itself.
The “Clear Calling” feature will “improve the audio quality of calls,” Shenaz Zack promised.