(New York) The most advanced companies in generative artificial intelligence (AI) will commit to developing new techniques to identify AI-generated disinformation during elections.
Meta, Microsoft, Google, OpenAI, TikTok, Adobe “and others” are finalizing a pact to be unveiled Friday on the sidelines of the major Munich Security Conference (MSC), a Meta spokesperson told AFP on Thursday.
“In this crucial electoral year,” with major votes planned in the United States, the United Kingdom, India, the European Union and Russia, “technology companies are working together to combat the use of AI to mislead voters,” according to the spokesperson.
According to The Washington Post, which was the first to report the agreement, the partners will work on tools to detect misleading content, label it as such for users, and limit its spread.
So-called generative AI can create, from a simple request in everyday language, text, images, audio files or videos, and can pass off documents fabricated from scratch as authentic.
Drawing on archival material, the technology can notably produce “deepfakes”: documents that show a person saying or doing something that never happened.
Several “deepfakes” have made headlines in recent weeks, including a fake telephone message imitating US President Joe Biden before the New Hampshire Democratic primary at the end of January.
On Saturday, former Pakistani Prime Minister Imran Khan’s party, the Pakistan Tehreek-e-Insaf (PTI), released a video message created with generative AI and featuring its leader, who is currently incarcerated and unable to speak publicly.
Meta, Google, Microsoft and OpenAI have already implemented watermarking systems that allow content created with generative AI to be identified.
In June, Florida governor and Republican primary candidate Ron DeSantis posted images of Donald Trump hugging his former COVID-19 adviser, Anthony Fauci.
AFP journalists established that these images had been generated using artificial intelligence.