(Washington) A fake image showing an explosion at the Pentagon briefly went viral on Twitter on Monday, causing markets to slump for ten minutes and reigniting the debate around the risks of artificial intelligence (AI).
The fake photograph, apparently made with a generative AI program (capable of producing text and images from a simple plain language query), compelled the US Department of Defense to respond.
“We can confirm that this is false information and that the Pentagon was not attacked today,” a spokesperson said.
Firefighters in Arlington, Virginia, near Washington, where the building is located, also posted on Twitter that no explosion or incident had taken place at the Pentagon or nearby.
The image appears to have caused markets to dip briefly, with the S&P 500 losing 0.29% from Friday's close before recovering.
“There was a dip related to this misinformation when the machines picked it up,” noted Pat O’Hare of Briefing.com, referring to automated trading software programmed to react to posts on social networks.
“But the fact that the move remained modest given the content of the false report suggests that others also found it dubious,” he added.
An account linked to the QAnon conspiracy movement was among the first to relay the false image, whose source is not known.
The incident comes after several fake photographs produced with generative AI circulated widely, showcasing the capabilities of this technology, such as images of the arrest of former US President Donald Trump or of the Pope in a puffer jacket.
Software like DALL-E 2, Midjourney and Stable Diffusion allow amateurs to create convincing fake images without needing to master editing software like Photoshop.
But while generative AI makes it easier to create false content, the problem of its dissemination and virality – the most dangerous components of disinformation – lies with the platforms, experts regularly point out.
“Users are using these tools to generate content more efficiently than before […] but it is still spreading via social networks,” said Sam Altman, the head of OpenAI (DALL-E, ChatGPT), during a congressional hearing in mid-May.