TikTok struggles to catch US election misinformation

TikTok’s algorithms are very good at keeping users glued to their screens for hours on end. They are far less good, however, at detecting ads containing misinformation about US elections, according to a new report.

Yet TikTok has banned all political ads from its platform since 2019.

The report, released Friday by the non-profit organization Global Witness and New York University’s (NYU) Cybersecurity for Democracy team, raises new concerns about the hugely popular video-sharing app’s ability to detect election lies at a time when growing numbers of young people are turning to it not just for entertainment, but also for information.

Global Witness and NYU have tested whether some of the most popular social platforms – Facebook, YouTube and TikTok – can detect and remove fake political ads targeting US voters ahead of next month’s midterm elections. They have conducted similar tests in Myanmar, Ethiopia, Kenya and Brazil with ads containing hate speech and misinformation, but this is the first time they have done so in the United States.

The US ads included misinformation about the voting process, such as when and how people can vote, as well as how election results are counted. They were also designed to sow distrust of the democratic process by spreading baseless allegations that the vote was “rigged” or decided before Election Day. The ads were submitted to the platforms for approval but were never actually published.

TikTok lagging behind

TikTok, which is owned by the Chinese company ByteDance, had the worst score, missing 90% of the ads submitted by the group. Facebook fared better, though it still let through seven of the 20 fake ads, which were submitted in English and Spanish.

Jon Lloyd, senior adviser at Global Witness, said the results for TikTok, in particular, were “a huge surprise” given that the platform has an outright ban on political advertising.

In a statement, TikTok reiterated that the platform prohibits election misinformation and paid political ads.

“We value feedback from NGOs, academics and other experts who help us continually strengthen our processes and policies,” the company added.

Facebook’s systems detected and removed the majority of ads submitted by Global Witness for approval.

“These reports were based on a very small sample of ads and are not representative given the number of political ads we review daily around the world,” Facebook said. “Our ad review process involves multiple layers of analysis and detection, both before and after an ad goes live.”

The company added that it is investing “significant resources” to protect the electoral process.

YouTube tops the list

YouTube, meanwhile, detected and removed all of the problematic ads, and even suspended the test account that Global Witness had set up to post them. However, the same Alphabet-owned video platform failed to detect any of the false or misleading election ads the group submitted for approval in Brazil.

“So it shows that there is a real global gap in their ability to enforce their own policies,” Lloyd said.

Google said it had “developed extensive measures to combat misinformation” on its platforms, including false claims about elections and voting.

“In 2021, we blocked or removed more than 3.4 billion ads for violating our policies, including 38 million for violating our misrepresentation policy,” the company said in a statement.

“We know how important it is to protect our users from this type of abuse – especially ahead of major elections like those in the United States and Brazil – and we continue to invest in and improve our enforcement systems to better detect and remove this content.”
