Facebook | The moderation of false information in groups is reinforced

(San Francisco) Meta on Thursday added a tool that allows Facebook group administrators to automatically filter content that has been recognized as fake news, ahead of the U.S. midterm elections, a period conducive to waves of misinformation on the social network.

Posted at 3:48 p.m.

“To ensure that content is more trustworthy […] group admins can automatically put messages containing information determined to be false by third-party verifiers on hold, so that they can review them before deleting them,” explained Tom Alison, the director of the Facebook application, in a press release Thursday.

The platform had already given group leaders more tools to moderate content, but it remains accused by many NGOs and authorities of not doing enough to fight misinformation.

More than 1.8 billion people use Facebook Groups every month. Parents of students, fans of artists and neighbors meet there to exchange news and organize activities, but also to discuss politics.

Meta has been criticized for not sufficiently policing groups that have contributed to the political radicalization of certain individuals, particularly during the 2020 US elections.

AFP participates, in around thirty countries, in “Third-Party Fact-Checking”, a verification program developed by Facebook since 2016. Around sixty media outlets worldwide, both generalist and specialized, are also part of the program.

If a piece of content is rated false or misleading by one of these outlets, Facebook users are less likely to see it appear in their News Feed. And if they do see it or try to share it, Facebook suggests that they read the fact-checking article.
