To prevent voters from being deceived, Meta (Facebook, Instagram) will require political campaigns to be transparent about their use of artificial intelligence (AI) in advertisements, a subject of growing concern as the 2024 US presidential election approaches.
“Advertisers will be required to disclose whenever an election, political or social issue advertisement contains a photorealistic image or video, or realistic audio, that has been digitally created or altered to depict a real person saying or doing something they did not say or do,” the social media giant announced in a statement Wednesday.
The new rule will apply worldwide starting next year.
It also covers advertisements that depict “a realistic-looking person who does not exist or a realistic-looking event that did not occur,” or “a realistic event that allegedly occurred, but which is not a faithful image, video or audio recording of the event.”
In all three cases, Meta will “add information on the ad.”
Advertisers will not need to disclose digital edits that do not affect the message, such as cropping or color correction of a photo.
Distinguishing real from AI
The rise of generative AI, which can produce text, images and sound from simple prompts in everyday language, facilitates the creation of all kinds of content, including “deepfakes,” photos or videos manipulated for misleading purposes.
From Washington to Brussels, authorities are trying to regulate this new technology, worried in particular about the risks it poses to democracy.
US President Joe Biden signed an executive order at the end of October imposing rules and guidelines on companies in the sector regarding the security and use of their AI tools.
The 80-year-old Democrat mentioned having seen a deepfake video of himself. “I wondered, when could I have said that?” he said, alarmed by the idea of people with bad intentions scamming families by impersonating relatives.
The White House wants companies, in particular, to develop tools to easily identify AI-generated content.
Microsoft also presented a series of initiatives on Wednesday to “help protect elections,” including a tool that will allow political candidates to digitally watermark their content and thereby authenticate it.
Disinformation
The tech group will also set up a team to help political campaign managers better understand the use of AI, as well as a center to support “democratic governments around the world in the implementation of secure and resilient electoral processes.”
“Over the next 14 months, more than two billion people around the world will have the opportunity to vote in national elections,” said Brad Smith, President of Microsoft, in a press release.
Meta is already in authorities’ crosshairs on issues ranging from personal data protection to child safety.
Since the Cambridge Analytica scandal, in which Facebook data was exploited to benefit Donald Trump in the United States and Brexit supporters in the United Kingdom in 2016, the Californian group has taken numerous measures to combat disinformation on its platforms.
“As always, we remove content that violates our regulations, whether it was created by AI or by a person,” the company recalled on Wednesday.
“Our independent fact-checking partners review and evaluate viral misinformation and we do not allow an ad to run if it is classified as false, altered, partially false, or lacking context.”
AFP is one of dozens of media outlets around the world paid by Meta under its fact-checking program.