Carbon copies of established media outlets, these sites spreading false information are multiplying and attracting advertisers.
You look up a current event, click on a link, read the article, then close your laptop. Without knowing it, you may have just read a text written by artificial intelligence (AI). AI-generated sites have multiplied in recent months. NewsGuard, a start-up specializing in monitoring misinformation, is sounding the alarm. In one of its reports, it had identified 49 such sites in May 2023; today, it lists 767, including 15 in French.
These sites, behind which there is little or no human presence, publish thousands of articles every day. Without real human oversight, they tend to spread false information, NewsGuard explains. In November 2023, for example, the news website Global Village Space announced that Benjamin Netanyahu had driven his psychiatrist to suicide. The artificial intelligence was unable to distinguish this joke, published in a satirical newspaper in 2010, from real news, and turned it into content resembling a genuine article, as NewsGuard revealed.
False information and enticing headlines to generate clicks
Humans are indeed behind these sites, but their identities remain unknown. “Some have understood the economic interest: advertising generates financial revenue,” explains Chine Labbé, editorial director of NewsGuard. Articles are formatted so that they appear among the first results on search engines. “This is what attracts advertisers; they want their ads to be seen, so they target the best-referenced sites to place them,” the journalist notes.
To maximize their visibility, they publish shocking information. Their specialty: announcing the death of people who are still alive. In January 2024, it was the journalist Deborah Vankin, of the American newspaper the LA Times, who paid the price. Why? Her name had been typed into search engines numerous times after the publication of one of her columns. “When AIs notice that a name is being searched for a lot, it is very common for them to produce obituaries; this is the type of article that people click on,” says Chine Labbé, the click being the number one objective.
“Others post polarizing content in order to cause chaos.”
Chine Labbé, editorial director of NewsGuard, to franceinfo
After detecting divisive topics on the web, such as the Israeli-Palestinian conflict, the AI produces articles on these subjects to attract readers. The goal remains the same: to provoke reactions and generate clicks. So how can you recognize this content? Certain clues make it easy to spot. First, these texts are full of repetition and contradictions. “If you ask an AI to write a recipe for a cow’s-egg omelet, it will happily do it,” notes Amélie Cordier, an engineer and artificial intelligence expert interviewed by franceinfo. Generally, they contain very few quotations and sources. Finally, these sites are extremely prolific. That is the case of Interstars, one of the 15 French-language AI-generated sites. Overproduction, clickbait headlines and repetition: everything is there. So many signs that should make you suspicious.
But some sites are more subtle and reproduce articles from other online media verbatim. That is the case of L’Observatoire de l’Europe, spotted by NewsGuard in its Reality Check newsletter. The site copies articles from Euronews, an online outlet linked to a European television channel. The similarities are striking: some articles are carbon copies, down to the letter. The most recent example is an article debunking the rumor of King Charles III’s death. Initially published on the Euronews website on March 19, it appeared on L’Observatoire de l’Europe the same day, but this time a certain Jean Delaunay was presented as the author.
It is not L’Observatoire de l’Europe’s first attempt. In December 2023, it had already reproduced a Euronews article about ChatGPT, and more recently an article on St. Patrick’s Day was plagiarized. Each time, “Jean Delaunay” appears as the byline, presented as the site’s founder and a professor of political science at several “renowned” universities. Our research found no meaningful trace of this supposed author.
Whether they plagiarize other media or spread false information, these sites abound. For Chine Labbé, the solution is “to educate companies so that they do not advertise on this type of site, in order to minimize their visibility.” For now, there is no legislation on the subject.