(New York) Facebook’s role in spreading images and hate speech likely to exacerbate inter-community conflict in India was highlighted again this weekend, as various American media outlets disclosed internal company documents.
Recovered by whistleblower Frances Haugen, these documents have already fueled several revelations about the impact of Facebook and its subsidiaries WhatsApp and Instagram on the intense polarization of political life in the United States and on the psychological health of some teenage girls.
On Saturday and Sunday, the Wall Street Journal, the New York Times and the Washington Post, among others, focused on Facebook’s presence in India, its largest market with 340 million users.
According to them, Mark Zuckerberg’s group was well aware of the growing presence of problematic content aimed in particular at the Muslim community, but did not deploy sufficient means to curb the phenomenon.
That attitude fits the pattern the whistleblower denounces more generally: Facebook knows about and studies these problems, but largely chooses to ignore them or to devote insufficient resources to containing them.
Account flooded with propaganda
A July 2020 report from the company’s own researchers showed that the share of inflammatory content in India had skyrocketed starting in December 2019, according to the Wall Street Journal.
“Rumors and calls for violence were particularly spread” on WhatsApp in February 2020, when clashes between the Hindu majority and the Muslim minority left dozens of people dead, the daily said.
Recognizing these issues, the group sent dozens of researchers into the field to talk to users.
In February 2019, Facebook had also created a fictitious account, that of a 21-year-old woman in northern India, to better understand the user experience, several media outlets reported.
Without the account giving any indication of its interests, it quickly found itself inundated with propaganda in favor of Hindu nationalist Prime Minister Narendra Modi and with hate speech against Muslims.
“I have seen more images of the dead in the past three weeks than I have seen in my entire life,” wrote the researcher responsible for this experiment according to the New York Times.
The group is “well aware that a weaker moderation policy in non-English-speaking countries makes the platform vulnerable to abuse by malicious actors and authoritarian regimes,” according to the Washington Post.
According to an internal document, the vast majority of the budget dedicated to fighting disinformation goes to the United States, even though the country represents less than 10% of users.
0.05% of content
Reacting to these new revelations, the social media giant stresses that it has clearly stepped up its fight against problematic content in recent years.
Facebook has “invested significantly in technologies detecting hate speech in various languages, including Hindi and Bengali,” assured a spokesperson on Sunday in a message to AFP. It also has over 15,000 people monitoring content in over 70 languages, including 20 languages spoken in India.
Regularly criticized for being mainly concerned with content in English, the company also claims to be extending the automatic detection of problematic content to other languages spoken in India and claims to already have algorithms working in Hindi, Bengali, Tamil and Urdu.
The company says it has thereby halved the volume of such content, which now accounts for only 0.05% of all content worldwide, the spokesperson added.
“Hate speech against marginalized groups, including Muslims, is on the rise around the world” and Facebook “is improving the enforcement of its rules” as that trend continues, the spokesperson also noted.
The group’s influence in India had already been singled out in 2020, after revelations by the Wall Street Journal accused it of a certain complacency toward the Hindu nationalist government in order to protect its commercial interests.