It is well known that social media amplifies fake news and other harmful content. The advocacy group Integrity Institute is now trying to establish exactly how much — and on Thursday it began publishing results that it plans to update weekly until the midterm elections on Nov. 8.
Posted at 7:00 a.m.
The institute’s initial report, posted online, found that a “well-crafted lie” drives more engagement than typical truthful content and that certain characteristics of social media sites and their algorithms help spread misinformation.
The analysis showed that Twitter has what the institute calls the biggest misinformation amplification factor, largely because of its feature allowing people to easily share, or “retweet,” posts. It is followed by TikTok, the China-owned video site, which uses machine learning models to predict engagement and make recommendations to users.
“We see a difference for each platform because each has different mechanisms for virality,” said Jeff Allen, a former integrity manager at Facebook and the founder and research director of the Integrity Institute.
The more virality mechanisms there are on a platform, the more we see fake news getting increased distribution.
Jeff Allen, Founder and Research Director of the Integrity Institute
The institute calculated its results by comparing the engagement of posts that members of the International Fact-Checking Network identified as false with the engagement of earlier posts from the same accounts that had not been flagged. It analyzed nearly 600 fact-checked posts from September on various topics, including the COVID-19 pandemic, the war in Ukraine and the upcoming election.
In the sample the institute studied, Facebook had the most instances of misinformation, but those claims were amplified less, in part because resharing posts requires more steps on the platform.
However, the institute found that some of Facebook’s newer features were more likely to amplify misinformation.
Facebook’s amplification factor for video content alone is closer to TikTok’s, according to the institute.
This is because Reels and Facebook Watch, the platform’s video features, “both rely heavily on algorithmic content recommendations” based on engagement, according to the institute’s calculations.
Instagram, which like Facebook is owned by Meta, had the lowest amplification factor. There was not yet enough data to make a statistically significant estimate for YouTube, according to the institute.
The institute plans to update its findings to track fluctuations in amplification, especially as the midterm elections approach. According to the institute’s report, false information is much more likely to be shared than purely factual content.
“Disinformation amplification can increase around critical events if disinformation narratives take hold,” the report said. “It can also decrease, if platforms implement design changes around the event that reduce the spread of disinformation.”
This article was originally published in The New York Times.