Lensa’s “magical avatars” and the normalization of fake news

ChatGPT, Lensa AI, GANs… Artificial intelligence is crossing a new frontier these days, one that deepens its presence in our daily lives but also raises pressing ethical and social questions. This is especially true of generators of fake and stylized portraits, such as the Lensa AI app, which has everyone talking at the moment.

All those people on social networks whose profile picture has, for the past few days, looked like a Hollywood star on a space mission? They are using the Lensa AI photo-editing app, which is causing a stir with its simplicity and effectiveness. It is also helping to normalize the fake news already rampant on those same networks.

A picture is worth a thousand words, they say. Those words are sometimes misleading. “The veracity that the public grants to these artificial portraits makes that same public more likely to consider them reliable sources of information,” observes Manos Tsakiris, a professor of psychology at the University of London, in a study published last week in the scientific journal iScience.

Ironically, the Lensa AI application sits at the top of the download charts in mobile app stores just as ChatGPT emerges, a conversational AI that promises to upend the way we interact with digital interfaces. Apps like Lensa AI have multiplied in recent months, and they produce not only stylized portraits but also hyperrealistic portraits conjured out of nowhere, thanks to an algorithm known as a generative adversarial network, or GAN.
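For readers curious about the mechanics, a GAN pits two neural networks against each other: a generator that fabricates images from random noise, and a discriminator that tries to tell them apart from real photos. The minimal PyTorch sketch below illustrates that adversarial training loop; the tiny architectures, dimensions and random stand-in data are illustrative assumptions, not the actual models behind Lensa AI or any other app.

```python
# A minimal sketch of the adversarial setup behind a GAN (PyTorch).
# Sizes, architectures and data here are toy assumptions for illustration.
import torch
import torch.nn as nn

latent_dim, image_dim, batch = 64, 28 * 28, 32

# Generator: maps random noise to a synthetic "image" vector in [-1, 1].
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: scores how likely an image is to be real (0 = fake, 1 = real).
D = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

criterion = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

# Random stand-in for a real photo dataset, scaled to [-1, 1].
real_images = torch.rand(batch, image_dim) * 2 - 1

for step in range(100):
    # --- Train the discriminator to separate real from generated images ---
    fake_images = G(torch.randn(batch, latent_dim)).detach()
    d_loss = criterion(D(real_images), torch.ones(batch, 1)) + \
             criterion(D(fake_images), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- Train the generator to fool the discriminator ---
    g_loss = criterion(D(G(torch.randn(batch, latent_dim))), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Trained at scale on real photographs, this tug-of-war is what lets the generator produce faces detailed enough to pass for real, which is precisely the effect the study measures.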

Whether or not they serve a niche, narrowly defined purpose like Lensa AI, these applications, a group that also includes DALL-E and Stable Diffusion, are in a sense to images what ChatGPT is to text. Naturally, they cause concern.

Professor Manos Tsakiris and three fellow researchers studied the impact of these AI-generated fake portraits on public perception. Two things worry them. First, Internet users trust the words conveyed by these avatars, which have no real existence. Second, once users become aware of the deception, their confidence in any form of information is eroded.

The problem is that the fake portraits have become so detailed that a large share of the population generally perceives them as more realistic than real ones. “Fake portraits are more likely to be perceived as real than real portraits, and participants [in the study] are more likely to identify with portraits that they have determined to be real,” notes Manos Tsakiris.

“The high likelihood of GAN portraits being perceived as more real than real people illustrates the power of fake news technologies.”

The lure of the fake

In Lensa’s terms of use, its creator, Prisma Labs, grants itself the right to use information produced by its users in other contexts. The lessons learned from the creation of its “magical avatars,” popular on TikTok and elsewhere, could in particular help strengthen its own portrait-generation algorithms.

To generate their avatars, users must upload at least a dozen portraits of themselves to the application, which are stored on Prisma’s servers. These are analyzed and used to produce new portraits rendered in the manner of various artistic or stylistic movements. Lensa swears that the photos it receives are then deleted from its servers, but that does not stop it from keeping everything else: the way the images are reworked, and the final product.

There is a lot of value in this data, probably more than in the original photos themselves, given the growing popularity of fake portraits in many fields, and not always for the right reasons. Marketing agencies, news media and, above all, political parties love them because they make it much easier to personalize advertising messages.

“There are very serious consequences to this widespread use of fake portraits at a time when this image culture is reaching unprecedented levels,” worries Professor Manos Tsakiris. “These seemingly realistic faces of non-existent people are increasingly used in marketing, journalism, social media, and political propaganda.”

More than ever, we are asked to judge the veracity of our social environment, the researcher continues. More than ever, that veracity is abused by technologies tailor-made to deceive. The public is therefore urged to be ever more vigilant toward the messages addressed to it.

The problem is that by distrusting everything, the public ends up believing nothing at all. That is what the second part of the University of London study tends to show: while low digital literacy makes people more credulous, those with a good grasp of digital issues tend to reject everything wholesale and withdraw from the conversation entirely.

In short, these technological advances work in favor of disinformation, the researchers conclude: “More research will be needed to understand how the public can be made more resistant to online disinformation and malicious content.”
