The images produced by AI have reached an unprecedented level of realism. Franceinfo shares its tips for avoiding the traps.
Emmanuel Macron picking up trash. Pope Francis wrapped up in a long white down jacket in the style of an American rapper. Barack Obama and Angela Merkel building sandcastles… These sometimes implausible images are widely shared on social networks, generating millions of views. Strikingly realistic, they were all produced by artificial intelligence.
DALL-E 2, Stable Diffusion, Midjourney… These image generators have made staggering progress in recent months. To imitate reality as closely as possible, these AI programs train relentlessly. In computing, this is called “machine learning”: a technique by which an AI can learn, almost without human assistance, by ingesting astronomical amounts of data, such as real photographs.
“Nearly 5.85 billion images are stored in the LAION database”, used by the Stable Diffusion application, explains Frédéric Jurie, professor of computer vision at the University of Caen. Eventually, he warns, “images produced by AI will no longer be discernible” from reality. For now, there are still ways to spot them. Here are seven tips for detecting an artificially generated image.
1 Pay attention to the hands
Artificial intelligences still struggle to reproduce certain parts of the human anatomy, in particular the hands. In this image produced by the Midjourney software depicting the arrest of Emmanuel Macron, the right hand of the head of state has six fingers.
This detail is easier to spot with the magnifying glass tool of Invid, a web browser extension specialized in detecting fake videos and images. Looking at the hands is generally the easiest way to identify the work of an AI, although this type of anomaly is becoming rarer.
2 Observe signage and writings
Some images, like this photograph of Emmanuel Macron as a garbage collector, show no obvious anatomical errors. However, several anomalies are easy to spot elsewhere in the image. On the trash can and on a white plastic bag, indecipherable inscriptions are visible. An AI like Midjourney rarely manages to render text correctly: it remains illegible or inconsistent. The same is visible in the fake arrest of Emmanuel Macron, where the word “police” on the helmets and uniforms is nowhere written correctly.
Another common defect: signs and traffic lights. Whether it contains text or not, signage varies significantly from country to country. In the image above, the green pedestrian light in the background does not match any known signage in France.
3 Look for distorted objects
Beyond these anatomical issues, AIs also struggle to render everyday objects. In this photo of Pope Francis in a white down jacket, which went viral on Twitter, the lenses of his glasses are deformed and he holds in his right hand a flask that seems to have melted into a plastic bag.
4 Check for watermarks
To prevent misuse, some image generators place a watermark on their output. DALL-E 2 adds a colored bar at the bottom right of each generated image. The problem: this marking is far from tamper-proof. “Making digital watermarking compulsory by law, that is, a signature invisible to the naked eye written across the whole image”, could ultimately be a way to identify the work of an AI, suggests Tina Nikoukhah, PhD in image processing at Paris-Saclay University.
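To give an idea of what such an invisible signature looks like, here is a deliberately simplified sketch (not any generator’s actual scheme): it hides a short bit string in the least significant bit of the first pixels of an 8-bit grayscale image, a change of at most ±1 per pixel and therefore invisible to the naked eye. Real watermarking systems are far more robust, surviving compression and cropping, but the principle is the same.

```python
import numpy as np

def embed_bits(image: np.ndarray, bits: list[int]) -> np.ndarray:
    """Hide `bits` in the least significant bit of the first pixels."""
    out = image.flatten().copy()
    # Clear each target pixel's lowest bit, then write our bit into it.
    out[:len(bits)] = (out[:len(bits)] & 0xFE) | np.array(bits, dtype=out.dtype)
    return out.reshape(image.shape)

def extract_bits(image: np.ndarray, n: int) -> list[int]:
    """Read back the first `n` hidden bits."""
    return [int(p) & 1 for p in image.flatten()[:n]]

# A tiny fake grayscale "photo" stands in for a generated image.
img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
marked = embed_bits(img, [1, 0, 1, 1])
print(extract_bits(marked, 4))  # -> [1, 0, 1, 1]
```

A toy scheme like this illustrates why the quote insists on a signature “written all over the image”: a mark confined to one corner, like DALL-E 2’s color bar, can simply be cropped away.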
5 Don’t rely too much on detection tools
Companies like Mayachitra have made artificial image detection tools available to the general public. But they are far from infallible, as franceinfo saw during its tests.
This photograph of an arrest of Donald Trump, which has never taken place to date, generated with Midjourney by the journalist Eliot Higgins, thus appears in the Mayachitra tool as “probably not generated by an AI”.
In addition, the usual tools for identifying falsified images, such as the photo-editing detection filters of the Invid plugin, are also ineffective against AIs. “These tools made it possible to know whether a photo had been retouched locally”, recalls Tina Nikoukhah. AI-generated images, however, are created entirely, pixel by pixel. They therefore bear no traces of local retouching that could betray them.
In the laboratory, in a study published in 2022, Italian researchers tried to determine whether it was possible to detect artificially generated images. “Their conclusion is that if you know in advance the model that was used to generate the images, you can predict relatively well whether an image is real or generated, provided it has not been manipulated or compressed afterwards”, explains Frédéric Jurie.
According to this study, an AI can be identified by the “noise” it leaves on the images it generates. “Noise, or grain, is in a way the DNA of a photograph, explains Tina Nikoukhah. Since AI-generated images don’t go through the same manufacturing steps as a photo taken by a camera, they don’t produce the same noise. This may indeed be a clue for spotting them.”
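The first step of this kind of analysis can be sketched very simply: isolate the image’s high-frequency residual by subtracting a smoothed copy from the original, then summarize it with a single number. The example below, assuming a grayscale image as a NumPy array, is only an illustration of the principle; real forensic pipelines build far richer statistical fingerprints from the same residual.

```python
import numpy as np

def noise_residual(img: np.ndarray) -> np.ndarray:
    """High-frequency residual: the image minus a 3x3 box-filtered copy."""
    img = img.astype(float)
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # Average the 9 shifted copies of the image = 3x3 box filter.
    smooth = sum(padded[i:i + h, j:j + w]
                 for i in range(3) for j in range(3)) / 9.0
    return img - smooth

def noise_level(img: np.ndarray) -> float:
    """Crude one-number 'noise signature' of an image."""
    return float(noise_residual(img).std())

rng = np.random.default_rng(0)
flat = np.full((64, 64), 128.0)            # perfectly clean synthetic image
noisy = flat + rng.normal(0, 5, (64, 64))  # same image with sensor-like grain
print(noise_level(flat), noise_level(noisy))  # the noisy one scores much higher
```

A camera photo and a generated image of the same scene would yield different residual statistics, which is exactly the clue the researchers exploit.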
Will more efficient artificial-image detection tools soon emerge? “I have little hope”, tempers Frédéric Jurie. First, because the detection models tested in the laboratory only work on the versions of the AIs they were programmed for, “but new versions of AI are constantly being released, within a few weeks or even a few hours”, the researcher points out. As each new version of an AI produces a different noise from its predecessor, current detection tools are bound to become obsolete quickly. Moreover, nothing prevents an AI “from learning how an artificial-image detector works in order to deceive it”, underlines the professor.
6 Use reverse image search
Where does this photo of Barack Obama and Angela Merkel building a sandcastle, widely shared on Twitter, come from? One way to find out is to use Google Images’ reverse image search. In the results, the Facebook profile of the image’s author appears: “julian_ai_art”, an artist specializing in AI image generation.
On Facebook, we learn that the photograph was published by its creator on March 17.
7 Compare all picture details
The image showing Vladimir Putin kneeling in front of Xi Jinping may have confused some Internet users. It was published during a very real event: a meeting between the Russian and Chinese presidents in Moscow on March 20.
To prove that this image shared on Twitter was fake, journalists from Radio Free Asia (RFA) looked for other photographs of the meeting to compare the backgrounds. In official photos of the event, from a Russian state media outlet, the furniture, white chairs, is noticeably different from that in the image circulated on Twitter.
According to the specialists interviewed by franceinfo, most of the visual anomalies that can be identified today will probably be corrected in future versions of AI image generators. “Since AIs will, by construction, produce images indistinguishable from reality, photographs should not be blindly trusted”, warns Frédéric Jurie. “But in fact that is already true today. A photo, like any other piece of information, is not enough in itself. The confidence that can be placed in it depends above all on the credibility of its source.”