The American tech giant Meta announced on Tuesday that it will, "in the coming months," identify any image generated by artificial intelligence (AI) published on its social networks, a measure it has already implemented for its own image-generation tool.
"In the coming months, we will label images that users post to Facebook, Instagram and Threads when we can detect industry-standard indicators that they are AI-generated," announced Nick Clegg, Meta's president of global affairs, in a blog post.
While Meta has applied these labels to images created with its Meta AI tool since its launch in December, "we want to be able to do this also with content created with tools from other companies," such as Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock, he added.
“We are building this capability now and, in the coming months, we will begin applying labels in all languages supported by each application,” the executive further emphasized.
The announcement comes as the rise of generative AI raises fears that these tools could be used to sow political chaos, through disinformation or misinformation, in the run-up to several major elections being held this year in countries that are home to nearly half of the world's population.
While Nick Clegg concedes that this large-scale labeling, based notably on invisible markers, "will not completely eliminate" the risk of fake images ("deepfakes"), it "would certainly minimize" their proliferation "within the limits of what technology currently allows."
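To give a rough sense of how such an invisible marker works, the sketch below checks an image file for the IPTC "digital source type" value that generators can embed in XMP metadata to flag AI-produced media. This is a simplified illustration under stated assumptions, not Meta's actual detection pipeline, which also relies on cryptographically signed provenance data and watermarking that a plain metadata check cannot verify.

```python
# Hypothetical sketch: look for the IPTC "trainedAlgorithmicMedia"
# digital-source-type URI that image generators can embed in XMP metadata.
# Real detectors (such as the systems Meta describes) also validate signed
# C2PA manifests and invisible watermarks, which this does not attempt.
import sys

AI_MARKER = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def has_ai_metadata_marker(path: str) -> bool:
    """Return True if the file's raw bytes contain the IPTC AI-media marker."""
    with open(path, "rb") as f:
        data = f.read()
    # XMP packets are stored as plain UTF-8 XML inside the image container,
    # so a byte search suffices for this rough heuristic. Stripping or
    # re-encoding the image removes the marker, which is one reason such
    # labeling "will not completely eliminate" deepfakes.
    return AI_MARKER in data

if __name__ == "__main__":
    for p in sys.argv[1:]:
        print(p, "->", "AI marker found" if has_ai_metadata_marker(p) else "no marker")
```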
"It's not perfect, the technology isn't quite there yet, but it's the most advanced attempt of any platform so far to provide meaningful transparency to billions of people around the world," Mr. Clegg told AFP.
"I really hope that by doing this and taking the lead, we will encourage the rest of the industry to work together and try to develop the common (technical) standards that we need," continued the Meta executive, who said the company is ready to "share" its open technology "as widely as possible."
The Californian company OpenAI, creator of ChatGPT, also announced in mid-January the launch of tools to combat disinformation, affirming its intention not to allow its technologies, including ChatGPT and the DALL-E 3 image generator, to be used for political purposes.