OpenAI seeks to counter disinformation in elections

The California-based company OpenAI, creator of ChatGPT, has announced the launch of tools to combat disinformation ahead of the dozens of elections planned this year in countries that are home to almost half of the world’s population.

The success of ChatGPT has popularized generative artificial intelligence, but it has also raised concerns about the technology’s possible use in disinformation campaigns aimed at swaying voters.

With dozens of elections planned this year, notably in the United States, India and Great Britain, OpenAI on Monday underlined its intention not to allow its tools, including ChatGPT and the image generator DALL-E 3, to be used for political purposes.

“We want to be sure that our technology will not be used in a way that undermines” the democratic process, OpenAI explained in a blog post.

“We are still evaluating the potential effectiveness of our tools for individual persuasion. Until we know more, we will not allow people to build applications for political or activist campaigns,” adds OpenAI.

Disinformation (false information spread intentionally) and misinformation (false information spread unintentionally) produced with AI could “undermine the legitimacy of newly elected governments,” warned the World Economic Forum, which runs until January 19 in Davos, Switzerland.

Fears of electoral disinformation date back several years, but the general public’s access to powerful AI-based text and image generators has greatly increased the threat, experts say.

OpenAI said on Monday it was working on tools to reliably determine the provenance of text generated by ChatGPT and to give users the ability to detect whether an image was created using DALL-E 3.

“Early this year, we will implement the Coalition for Content Provenance and Authenticity’s Digital Credentials,” the company explains.

This coalition, known as the C2PA, brings together companies including Microsoft, Sony, Adobe, Nikon and Canon.

According to OpenAI, when asked about the US elections, ChatGPT will, for example, direct voters to recognized official sites.

“The lessons learned from this work will be the basis of our approach for other countries and regions,” the company says, adding that DALL-E 3 contains “guardrails” to prevent users from generating images of real people, particularly candidates.

The American giants Google and Meta announced initiatives last year to regulate the use of AI in political campaigns.
