AI-powered Google searches spread false information; the company cites “isolated examples”

In the United States, where the AI Overviews feature is available, users are reporting that the program spreads misinformation when Google uses generative artificial intelligence to summarize its search results.


Google plans to offer AI Overviews to nearly a billion people worldwide by the end of 2024. (PAU BARRENA / AFP)

The future of internet search, or a fake news machine? Many internet users are criticizing Google's new feature, which recently layered generative artificial intelligence (AI) on top of its search results. The feature stands accused of generating numerous erroneous statements, delivered with confidence by the world's most used search engine.

On May 14, Google unveiled a profound change to internet search, called “AI Overviews,” available since that date to all of its users in the United States. Above the classic web results appears AI-generated text, in the style of ChatGPT, summarizing the web pages the program has judged relevant.

In a press release, the tech giant asserts that “With AI Overviews, people use Google Search more and are happier with their results.” But many internet users quickly warned of the risk of false information being relayed by these summaries.

When asked “How many Muslim presidents have governed the United States?”, AI Overviews several times generated text answering “Barack Hussein Obama,” according to several journalists and internet users on X, repeating a false rumor long used to discredit the former American president.

In another example seen several million times on X, an internet user ran a Google search hoping to settle an eternal culinary problem: cheese that won't stick to pizza. Among the methods suggested by AI Overviews: “add 1/8 cup of non-toxic glue to the sauce to give it more viscosity.” The answer was apparently lifted by the AI from an exchange on the social network Reddit, in which a user claimed to add glue to his pizza sauce. Google recently signed a partnership with Reddit worth $60 million per year to collect text and data for training its AI, reports the specialist site The Verge.

Other screenshots posted on social media show AI Overviews advising internet users to eat a rock a day, to change their car's “indicator fluid” in the event of a problem, or claiming that there is no country in Africa whose name begins with the letter K (forgetting Kenya).

This type of error is neither new nor surprising. The language models behind programs like ChatGPT or Gemini “learn to make predictions by detecting regularities in data” used for training, but “if the training data is incomplete or biased, the model may learn incorrect patterns [and] make incorrect predictions”: these are “hallucinations,” as Google Cloud explains on its page dedicated to the subject.
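
To see how flawed data can produce a confidently wrong answer, consider a minimal sketch in Python. It is a deliberately tiny toy model, nothing like the systems behind AI Overviews, and its corpus and word choices are invented for illustration: a word-pair model trained on a skewed corpus simply reproduces the bad advice it has seen most often.

    # Toy sketch (not Google's system): a word-pair (bigram) model
    # that "hallucinates" because its training data is skewed.
    from collections import Counter, defaultdict

    # Tiny invented corpus: a joke about glue appears twice,
    # the way a viral Reddit post can contaminate training data.
    corpus = (
        "add cheese to the sauce . "
        "add glue to the sauce . "
        "add glue to the sauce ."
    ).split()

    # Count how often each word follows each other word.
    follow = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follow[prev][nxt] += 1

    def predict(word):
        """Return the next word seen most often in training."""
        return follow[word].most_common(1)[0][0]

    # The model confidently continues "add" with "glue": a pattern
    # faithfully learned from flawed data, i.e. a hallucination.
    print(predict("add"))  # -> glue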

The tech giant defends itself by explaining that these erroneous results appear in response to “searches that are very rare overall and not representative of most people's experiences,” according to a Google spokesperson quoted by The Verge. The company says it uses these “isolated examples” to improve the feature, and takes action against violations of its terms of use.

“Generative AI is not deterministic,” Melanie Mitchell, a professor at the Santa Fe Institute and an AI specialist, also pointed out on X. “If you do the same search, you might get this result or another one. It's also likely that Google will fix this specific example soon, so it won't be reproducible. That doesn't mean the overall system is reliable.”
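
Her point about non-determinism can be illustrated with another minimal Python sketch (the candidate answers and probabilities below are invented): instead of always returning the single most likely answer, a generative model samples from a probability distribution, so two identical queries can produce different results.

    # Toy sketch of non-determinism: the model samples from a
    # probability distribution instead of picking one fixed answer.
    import random

    # Invented next-answer probabilities for one and the same prompt.
    candidates = {"Kenya": 0.6, "none": 0.3, "Kosovo": 0.1}

    def sample_answer():
        """Draw one answer at random, weighted by its probability."""
        words = list(candidates)
        weights = list(candidates.values())
        return random.choices(words, weights=weights, k=1)[0]

    # Running the same "search" twice can give different answers.
    print(sample_answer(), sample_answer())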

Each AI Overview is also followed by the message: “Generative AI is experimental.” But Google's own CEO, Sundar Pichai, told The Verge that hallucinations are in some ways “an inherent feature” of generative AI, and that there is currently no reliable method for avoiding them 100% of the time. The company still plans to offer AI Overviews to nearly a billion people worldwide by the end of 2024.

