Seeing is no longer believing: how can we protect democracy in the era of deepfake videos?

As we approach the 2024 US election, a dangerous new wave of artificial intelligence (AI)-generated misinformation is sweeping the digital landscape, raising the stakes higher than ever before.

In an age where information shapes public opinion, a crucial question looms: can we trust the information that shapes our reality? Misinformation, ranging from fake news headlines (like a random Facebook post accusing a Haitian immigrant of stealing and eating the cat of his neighbor's daughter's friend) to deepfake videos, or deepfakes (like those of Elon Musk used in cryptocurrency scams), has the potential to sow confusion, deepen polarization and undermine the very foundations of democracy.

What is truly insidious about fake news and deepfakes is their exploitation of a key vulnerability in human psychology: our emotions. Studies show that when a person is emotionally charged, whether positively or negatively, they are more likely to share content without critically evaluating it.

According to a 2019 analysis, 8.5% of Facebook users shared at least one piece of fake news during the 2016 US election campaign. Deepfakes, which manipulate the appearance of real people with uncanny precision, take this to the next level by blurring the line between truth and fiction.

Imagine a viral video of a public figure giving a divisive speech that later turns out to be fake. By the time the truth comes out, the damage is done: the emotional reaction has already deepened divisions, misled the public and generated support for a cause under false pretenses.

According to a recently published Forbes article, more than half a million deepfake videos were circulating on social media in 2023, a figure that reflects how platforms struggle to detect fake content quickly enough to prevent it from spreading virally. The rapid pace of social media consumption compounds the problem: the interactive nature of these platforms means deepfakes are viewed and shared by users in near real time. As deepfakes become more sophisticated, they will inevitably become harder to detect and control, and lies will continue to spread faster than corrections can be made.

So what can we do to protect ourselves from their growing threat?

One promising solution is emotionally intelligent algorithms: AI systems designed to detect and de-rank manipulative content. These systems would learn to flag content that aims to deceive or emotionally manipulate users before it goes viral. While platforms like Facebook and X are making progress in this direction, the technology still lags behind the rapid evolution of deepfakes. What we need are AI systems that operate in real time, learning patterns of user engagement and detecting deepfakes as soon as they appear.
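To make the idea concrete, here is a minimal, purely illustrative Python sketch of how a feed-ranking system might down-weight emotionally manipulative posts. The scoring heuristic, term list, thresholds and class names are all assumptions made for this example; a real platform would rely on trained classifiers rather than keyword matching.

```python
# A minimal sketch of the de-ranking idea described above, not any platform's
# actual system. The emotion score is a stand-in heuristic; all names and
# thresholds are assumptions.
from dataclasses import dataclass

# Hypothetical list of emotionally charged trigger phrases (illustrative only).
CHARGED_TERMS = {"outrage", "shocking", "they don't want you to know", "destroyed"}

@dataclass
class Post:
    post_id: str
    text: str
    engagement: float  # e.g. normalized likes + shares

def emotion_score(text: str) -> float:
    """Crude proxy for emotional manipulation: share of charged terms present."""
    text_lower = text.lower()
    hits = sum(1 for term in CHARGED_TERMS if term in text_lower)
    return min(1.0, hits / 3)  # cap at 1.0; the divisor is an arbitrary assumption

def rank(posts: list[Post], manipulation_penalty: float = 0.7) -> list[Post]:
    """De-rank posts in proportion to their estimated manipulation score."""
    def adjusted(p: Post) -> float:
        return p.engagement * (1 - manipulation_penalty * emotion_score(p.text))
    return sorted(posts, key=adjusted, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("a", "Shocking outrage: what they don't want you to know!", 0.9),
        Post("b", "City council publishes new budget report.", 0.4),
    ]
    for post in rank(feed):
        print(post.post_id, round(emotion_score(post.text), 2))
```

In this toy ranking, the emotionally charged post drops below the neutral one despite its higher raw engagement, which is the behavior the paragraph above describes.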

Another approach is blockchain technology, which could offer a way to verify the authenticity of videos and images by creating an immutable record of their origins. Platforms could use this technology to ensure that users can trace content back to its source. Although still in development, blockchain-based verification could play a role in distinguishing real content from AI-generated deepfakes.
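As a rough illustration of what such provenance could look like, the Python sketch below registers a fingerprint of a media file in a toy append-only ledger and checks later copies against it. It is not a real blockchain client; every structure and field name here is an assumption made for the example.

```python
# A toy sketch of blockchain-style provenance for media files, illustrating the
# idea of an immutable record of origins. Not a real blockchain; all field
# names and structures are assumptions.
import hashlib
import json
import time

def fingerprint(media_bytes: bytes) -> str:
    """Content fingerprint: a SHA-256 hash of the raw file bytes."""
    return hashlib.sha256(media_bytes).hexdigest()

class ProvenanceLedger:
    """Append-only chain of records; each record commits to the previous one."""

    def __init__(self) -> None:
        self.chain: list[dict] = []

    def register(self, media_bytes: bytes, source: str) -> dict:
        prev_hash = self.chain[-1]["record_hash"] if self.chain else "0" * 64
        record = {
            "content_hash": fingerprint(media_bytes),
            "source": source,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        record["record_hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.chain.append(record)
        return record

    def verify(self, media_bytes: bytes) -> bool:
        """Check whether this exact file was ever registered."""
        target = fingerprint(media_bytes)
        return any(r["content_hash"] == target for r in self.chain)

if __name__ == "__main__":
    ledger = ProvenanceLedger()
    original = b"...raw video bytes from the campaign's camera..."
    ledger.register(original, source="official-campaign-channel")
    print(ledger.verify(original))                      # True: traceable to its source
    print(ledger.verify(b"...re-encoded deepfake..."))  # False: no provenance record
```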

Finally, stricter regulations and policies must be implemented, particularly regarding the creation and dissemination of deepfakes. California's 2019 law banning deepfakes designed to mislead citizens during election campaigns is a good start, but we need comprehensive, global legislation to truly tackle the problem. One measure could be to require AI-generated content to be watermarked or digitally signed so that real and fake content can be told apart.
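To show what digital signing could mean in practice, the sketch below imagines an AI generator signing the media it produces and a platform verifying that signature before labelling the content as AI-generated. It assumes the third-party Python `cryptography` package is installed; the workflow and function names are hypothetical, not part of any existing standard.

```python
# A minimal sketch of digitally signing AI-generated media, assuming the
# third-party `cryptography` package. The workflow and names are illustrative
# assumptions, not an existing labelling standard.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The generator holds a private key; platforms hold the matching public key.
generator_key = Ed25519PrivateKey.generate()
platform_public_key = generator_key.public_key()

def sign_generated_media(media_bytes: bytes) -> bytes:
    """Generator side: attach a signature proving the media came from this system."""
    return generator_key.sign(media_bytes)

def is_declared_ai_content(media_bytes: bytes, signature: bytes) -> bool:
    """Platform side: does the signature match a known AI generator's key?"""
    try:
        platform_public_key.verify(signature, media_bytes)
        return True   # verified: label as AI-generated
    except InvalidSignature:
        return False  # no valid signature: provenance unknown

if __name__ == "__main__":
    clip = b"...bytes of a synthetic video..."
    sig = sign_generated_media(clip)
    print(is_declared_ai_content(clip, sig))          # True
    print(is_declared_ai_content(clip + b"x", sig))   # False: content was altered
```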

Deepfakes pose a real threat to democratic processes. Emotionally intelligent algorithms and blockchain technology offer hope, but the solution ultimately lies in a combination of technology, education and regulation.

Nobel Peace Prize winner Maria Ressa’s warning about the erosion of trust in media and institutions seems particularly acute today. As she so aptly put it: “Without facts, you can’t have truth. Without truth, you can’t have trust. Without trust, we have no shared reality, no democracy, and it becomes impossible to address the existential problems of our world.”
