Seen from a certain angle, this wooded area really does look like a portrait of John Lennon. That upside-down socket looks strangely expressive. This company, with its smiling logo, seems friendlier than its rivals. It may all seem mundane, until artificial intelligence gets involved.
Pareidolia is the name we give to the tendency to detect the features of a human face in landscapes, objects or images that have anything but the characteristics of a portrait. Exploiting it is a technique as old as time, used in particular in marketing to make a business, a product or an advertising message more attractive. Think of all those family cars whose grille looks like a smiling face, those sports cars whose headlights are shaped like angry eyes, or those vans styled like a cyborg to give themselves a hi-tech look…
Optical illusion
Incidentally, the seemingly smiling logo cited above is that of Amazon. The yellow arrow that points from A to Z under its name also has the shape of a smirk, seeming to say: “Ah yes, but I know what you really need.” The almost human shape of wall electrical outlets allowed Hydro-Québec to build an advertising campaign that, for several years, enjoyed its fair share of popular success.
That landscape composed of trees, flowers and a corner of a pond which transforms, when you take a step back, into a surprisingly realistic portrait of a famous (or not-so-famous) personality is no more real than a talking socket. Yet we are seeing these images more and more often, shared on the Internet without much explanation.
In what corner of the world can such expressive scenery be found? Nowhere. Or perhaps in the world of unicorns, where the architects and landscapers are called Dall-E and Midjourney. Because if there is one area in which generative artificial intelligences excel, it is the creation of sophisticated images like these. And that is all it takes to spark a new trend around the phenomenon of pareidolia.
Especially since, in 2023, you no longer need the talent of an artist like Salvador Dalí to produce trompe-l’oeil images. Just type the right words into the ChatGPT prompt, in its GPT-4 version, or, in the case of Midjourney, into the right chat channel of the Discord application.
Discord, by the way, is particularly popular among video game enthusiasts because it lets you talk live, orally or in writing, with other players. Its logo has the shape of a large video game controller seen from above. Many also see in it the friendly, smiling face of a little android. Pareidolia is everywhere…
And naturally, when we see with what skill generative AIs can produce trompe-l’oeil images, and when we know how effective this technique is at eliciting the desired emotion from the public, we hardly dare imagine what they can do with words.
Going too far
While pareidolia is a fun side effect of AI-produced images, other applications are far more worrying. For example, brand-new websites offer to completely undress the people in photos submitted to them.
After hyper-faked photos and videos, the famous deepfakes, here comes “fake undressing”, or deep nudes. An AI analyzes a person’s silhouette under their clothes and produces an image in which the clothing completely disappears. Already, dozens of sites or web applications promise to “undress anyone online and absolutely free!”, as one of these sites puts it, adding: “Here is the work of a unique AI algorithm which produces a nude image from photos of clothed women.”
Because, obviously, these sites primarily offer to undress female bodies. Women of all ages are just beginning to realize how important it is not to share photos of themselves in positions or situations that could harm them if they were ever published; soon they will have to be advised to stop taking pictures of themselves, period.
Technology is a double-edged sword. Generative AI is perhaps currently the most obvious illustration of that expression. We immediately see the benefits, and yet each new application seems to come with an ever-growing risk of abuse or harm.
In the coming months, governments in many countries will eventually pass laws requiring the creators of these generative AIs to identify their technology’s output and to take responsibility for what it produces. Some of these companies have already promised to do so, without waiting for legislative intervention.
Perhaps these measures will limit the emergence of AI applications that can clearly harm part of the population. But they make you realize that the problem in all this is perhaps not strictly technological in nature…